Tutorial: Validating designs in digital twin simulations with clones
A step-by-step guide to building an example solution that validates a design in its digital twin through clone simulation
This tutorial explores an example solution powered by Virtue's framework that simulates real human experiences, focusing on how clones (Virtual Humans in a 3D environment) interact with various objects and display emotions.
During the simulation, VHs (powered by psychological profiles cloned from real humans) participate in an experiential scenario, which is a virtual recreation of a real experience. This enables you to gain insights into how multiple variables could impact humans psychologically in the real-life version of the experience.
Business case
A company that designs home interiors wants to validate design options with their customers in Virtual Reality, so that they can design and deliver the best possible room designs.
Challenges
Designing home interiors has the following challenges:
Once a room's fit-out work is completed, making changes or rebuilding is expensive and impractical.
As a wide variety of design choices exist, the design process can be overwhelming for both the company and customers.
Couples, such as a husband and wife, may have conflicting preferences, complicating the decision-making process.
Designing and implementing a Clone Simulation solution
Leveraging Affective Computing by Virtue's framework when implementing your solution for this business case is an excellent choice.
You can take advantage of features such as simulation to recreate various scenarios and observe their impact on emotionally-driven digital characters (Virtual Humans or VHs), thereby streamlining the process of validating the experience.
Overview
An Affective Computing by Virtue-powered solution that aims at personalizing end-user experiences through clone simulation typically consists of the following phases:
Diagnostics and calibration
Diagnostics is the process in which EDAA™, the underlying framework of Affective Computing by Virtue, analyzes and establishes a preliminary (baseline) psychological profile for each user.
Calibration is the process of validating and adjusting each user's previously-established psychological profile based on their current psychological state.
For detailed information, see Diagnostics and calibration.
Both these processes are important parts of data collection, which is the first phase of implementing the solution. Data collection provides your solution with baselines, context, and boundaries.
To collect this initial data, you can either run a real experience or input synthetic data from an external data set.
However, as this solution aims to provide highly-personalized interior design to customers based on their preferences (which can only be achieved satisfactorily if the initial data is obtained directly from the clients), we recommend running the event of raw data collection.
Raw data collection
Before simulating an experience, Affective Computing by Virtue must be made aware of the boundaries of each situation. Doing this improves solution performance and prevents AI hallucinations.
The event of raw data collection (RDC) can help you achieve this. RDC simply means running the experience provided by your solution with real human users.
This process introduces the reality of the experience and provides context when situations similar to the ones that the real human users face are simulated using Virtual Humans (VHs).
To learn more about RDC, see Importance of raw data collection for simulation.
Simulation
Simulation enables you to replicate the situations that the real human users face and observe their effects over a custom period of time. This is the next phase of implementing the solution.
Simulation means running an experience in a virtual environment using VHs, whose psychological profiles are cloned or augmented from that of the real human users.
Clone simulation
A type of experience simulation in which each VH has a psychological profile that matches that of a specific real human end-user. This tutorial aims to help you understand how to implement clone simulation.
Bulk simulation
A type of experience simulation in which VHs are not 1:1 clones of real human end-users, but instead based on augmented data from real users’ psychological profiles.
For more information, see Simulation.
Result analysis
In the final phase of implementing the solution, you can analyze user engagement levels in each interaction session to gain data insights and visibility into the solution's performance.
Additionally, depending on the specifics of your solution's requirements, design, and definition of success, you can integrate external data analytics tools to visualize the results, generate reports, and gain the specific insights you require.
Implementing the solution
The process of implementing the solution consists of the following steps:
Working with our APIs
As an internal user responsible for setting up and managing an Affective Computing by Virtue project, you must use your API key and access token to integrate your front end with Orchestra and work with our APIs.
For more information, see Authenticating your data source to use our APIs.
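If you script your API calls, you can wrap this authentication in a small helper. The following is a minimal sketch only: the header names (an Authorization bearer token and an x-api-key header) and the call_api helper are assumptions, so substitute whatever scheme your Orchestra tenant actually expects.
Sample script (Python)
# Minimal sketch of an authenticated request helper. The header names below
# are assumptions; use the scheme your Orchestra tenant expects.
import requests

SERVER_URL = "{{Server_URL}}"        # Placeholder, as in the samples below
API_KEY = "<your-api-key>"           # Hypothetical placeholder
ACCESS_TOKEN = "<your-access-token>" # Hypothetical placeholder

def call_api(method: str, path: str, payload: dict | None = None) -> dict:
    """Send an authenticated request to an Affective Computing by Virtue endpoint."""
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # Assumed bearer-token scheme
        "x-api-key": API_KEY,                       # Assumed API-key header name
        "Content-Type": "application/json",
    }
    response = requests.request(method, f"{SERVER_URL}{path}", json=payload, headers=headers)
    response.raise_for_status()
    return response.json()

# Example usage: create a project with the same body shown in Step 1 below.
# result = call_api("POST", "/api/services/app/V2/Project/Create", {...})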
To separate data collection and simulation, you must create two projects, one for each purpose. Because the two projects are linked, the simulation project uses the data collected during RDC.
Step 1: Create the RDC project
You can create a project for raw data collection (RDC) by following the procedure described in Creating and managing projects. You can use this project to run multiple RDC sessions.
Example: Creating a project
The following example illustrates how to create a project for the purpose of RDC using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
{
"projectName":"Validating Designs - RDC",
"project_Domain_Id":4, // 4 -> Digital user experience
"project_Function_Id":13, //13 -> Neuro-architecture
"project_Purpose":"Data_Collection", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser":false,
"productCalibrationStatus":false,
"interactionSetups":[
{
"interaction_setup_id":2,
"interaction_mode":"diagnostics" // Configuring the data input for Heart Rate for diagnostics
},
{
"interaction_setup_id":3,
"interaction_mode":"diagnostics" // Configuring the data input for Speech for diagnostics
},
{
"interaction_setup_id":2,
"interaction_mode":"action" // Configuring the data input for Heart Rate for action mode
},
{
"interaction_setup_id":3, // Configuring the data input for Speech for action mode
"interaction_mode":"action"
},
{
"interaction_setup_id":26, // Configuring the data input for external signals for action mode
"interaction_mode":"action"
},
{
"interaction_setup_id":2, // Configuring the data input for Heart Rate for UC mode
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":3, // Configuring the data input for Speech for UC mode
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":26, // Configuring the data input for external signals for UC mode
"interaction_mode":"user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Validating Designs - RDC",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 13,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "Data_Collection",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 2: Parameterize the RDC project
After creating the RDC project, you can parameterize it to customize the behavior of Affective Computing by Virtue according to the requirements of your solution.
For detailed information about parameterization, see Parameterizing a project.
During this step, you can do the following:
Parameterize data inputs
You must declare (define) data inputs for each type of (raw or synthetic) external data you want to utilize in your solution. You can do this at the same time as creating the project.
For this project, you can configure the following data inputs:
Real human user
Physiological
Heart rate, through a wearable heart rate sensor
Speech, through a microphone
User motion
Conversation initiation or trigger and answers (Time to answer, number of words, pauses between words, duration of the answer, meaning, and more)
Interactions with the front end of an application, depending on how the experience is designed
For more information, see Understanding data inputs.
Example: Configuring data inputs
The following example illustrates how to declare data inputs using the interactionSetups parameter of the Project/Create API.
This parameter enables you to declare inputs and their respective operational modes.
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project. Also see Viewing available data inputs (interaction setups).
Request sample
{
"projectName":"Validating Designs - RDC",
"project_Domain_Id":4, // 4 -> Digital user experience
"project_Function_Id":13, //13 -> Neuro-architecture
"project_Purpose":"Data_Collection", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser":false,
"productCalibrationStatus":false,
"interactionSetups":[
{
"interaction_setup_id":2,
"interaction_mode":"diagnostics" // Configuring the data input for Heart Rate for diagnostics
},
{
"interaction_setup_id":3,
"interaction_mode":"diagnostics" // Configuring the data input for Speech for diagnostics
},
{
"interaction_setup_id":2,
"interaction_mode":"action" // Configuring the data input for Heart Rate for action mode
},
{
"interaction_setup_id":3, // Configuring the data input for Speech for action mode
"interaction_mode":"action"
},
{
"interaction_setup_id":26, // Configuring the data input for external signals for action mode
"interaction_mode":"action"
},
{
"interaction_setup_id":2, // Configuring the data input for Heart Rate for UC mode
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":3, // Configuring the data input for Speech for UC mode
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":26, // Configuring the data input for external signals for UC mode
"interaction_mode":"user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Validating Designs - RDC",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 13,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "Data_Collection",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize entities and objects
Affective Computing by Virtue supports the following types of entities:
Environment
The 3D environment where real or virtual human users can interact with the solution.
Virtual Humans (VHs)
Emotionally-driven NPCs (powered by EDAA™) that interact with the solution during simulation.
The environment entity represents the environment in which users would participate in the experience.
For this solution, you require only one environment for both RDC and simulation; real humans can undergo the experience first (during RDC) to establish the context and later, the simulation can also take place in the same environment.
When parameterizing the RDC project, as the participants are real human users, you do not need to declare VH entities.
Objects are the items in the environment with which your (real or virtual) end users can interact. They can serve as a reference for data visualization or as channels to reflect status updates.
For your solution, you can declare all the objects that you want to include in the interior design, such as walls, surfaces, sinks, faucets, light fixtures, panels, and more.
For more information, see:
Example: Creating an entity
The following example illustrates how to create an environment entity using the DestinationEntity/CreateEntities API:
API
POST {{Server_URL}}/api/services/app/DestinationEntity/CreateEntities
To learn more about the API, see Create an entity.
Request sample
{
"destination_Entity_Types_Id": 2, // 2 => 3D Environment
"projectId": {{data_collection_projectID}}, // The project ID of the Data_Collection (RDC) project
"entityList":
[
{
"name": "3BHKHome1",
"description": "3 BHK apartment prototype - environment to be used for data collection”,
"identifier": "ENV_3BHKO1"
}
]
}
Response sample
{
"result": [
139979
],
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an object
The following example illustrates how to create an object using the DestinationObject/CreateObjects API:
API
POST {{Server_URL}}/api/services/app/DestinationObject/CreateObjects
To learn more about the API, see Create an object.
Request sample
{
"destination_Entity_Environment_Id": 139979, // The environment in which the object must be created
"destinationObjectList": [
{
"name": "Room 1 Wall Panel Wood",
"description": "Wooden wall panel in room 1",
"identifier": "room1_wall_panel_wood"
}]
}
Response sample
{
"result": [
663
],
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize actions and attributes
Affective Computing by Virtue supports the following categories of actions:
Content
Delivering any media, such as images, video, or sound.
Interactions
Delivering statements or asking questions.
Triggered actions
Delivering an action as a response to specific events or conditions.
As your solution aims to personalize interior design based on user preferences, you can define changes to objects, covering all possible variations you can offer, as triggered actions. For example, if you can provide 15 different light fixture options, kitchen platforms of 10 different materials, and 5 different wallpaper designs, you must declare each of these variations as a triggered action.
As your solution also relies on EDAA™ to ask questions to the participants, you must declare the questions as interactions and design them based on what you are trying to validate. For example, you can design a question to ask users how they feel about a particular color palette.
We also recommend thoughtfully planning who would ask the questions.
As humans wouldn't typically encounter questions from an unseen voice, you could introduce an object in your solution that is implemented as an NPC in Unreal Engine; during the experience, an NPC would approach a user and ask them the question.
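Because each design variation must exist as its own triggered action, you may prefer to script their creation instead of sending one request at a time by hand. The following is a minimal sketch that mirrors the FeedingData/Create request body shown later in this section; the option list, authorization header, and printed output are illustrative assumptions.
Sample script (Python)
# Sketch: declare one triggered action per design variation via FeedingData/Create.
# The option list below is hypothetical; the request body mirrors the example
# later in this section.
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
PROJECT_ID = 742                                     # RDC project ID from the examples

light_fixtures = ["jute_chandelier", "rattan_chandelier", "brass_pendant"]  # Hypothetical options

for option in light_fixtures:
    identity = f"room1_light_fixture_{option}"
    body = {
        "projectId": PROJECT_ID,
        "identity": identity,
        "feeding_Value": identity,
        "feeding_Action_Category_ID": 3,   # 3 => Triggered action
        "feeding_Action_Type_ID": 18,      # 18 => Value
        "isCopyRighted": True,
        "isDiagnostics": False,
        "isVerified": True,
    }
    resp = requests.post(f"{SERVER_URL}/api/services/app/v2/FeedingData/Create",
                         json=body, headers=HEADERS)
    resp.raise_for_status()
    print(identity, "->", resp.json()["result"]["id"])  # Keep the returned action IDs for grouping under attributes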
Attributes personalize interactions between your solution and end-users by shaping and classifying actions. They can be considered as "folders" that group related or similar actions.
For example, an attribute called boho_design can contain the actions that must be delivered to implement a bohemian vibe in the interior design.
Similar to actions, attributes are also categorized as content attributes, interaction attributes, and triggered action attributes.
You must group your actions under appropriate attributes of the appropriate category. For example, all questions posed to users in Room 1 can be grouped under an interaction attribute called room1_questions.
For more information, see:
Example: Creating an action
The following example illustrates how to create a triggered action using the FeedingData/Create API. The action in this example is designed to change the light fixture in Room 1 to a jute chandelier:
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
To learn more about the API, see Add an action.
Request sample
{
"projectId": {{projectID}}, // The project ID of the project
"identity": "room1_light_fixture_jute_chandelier",
"feeding_Value": "room1_light_fixture_jute_chandelier",
"feeding_Action_Category_ID": 3, // 3 => Triggered action (The action category)
"feeding_Action_Type_ID": 18, // 18 => Value (The action type)
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true
}
Response sample
{
"result": {
"projectId": 742,
"identity": "room1_light_fixture_jute_chandelier",
"feeding_Value": "room1_light_fixture_jute_chandelier",
"feeding_Action_Type_ID": 18,
"feeding_Action_Category_ID": 3,
"isImported": false,
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true,
"id": 127544
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an attribute
The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered in the living room of the apartment:
API
POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create
To learn more about the API, see Create a triggered action attribute.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"name": "living_room",
"action_Type_ID": 18, // 18 => Value (The action type)
"feedingDataIds": [
127544,127545,127546 // The triggered actions that are grouped under the attribute
]
}
Response sample
{
"result": {
"id": 530,
"name": "room1",
"isDeleted": false,
"isImported": false,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"action_Type_ID": 18,
"tenantName": null,
"projectName": null,
"project_Domain_Name": null,
"project_Function_Name": null,
"feedingDatasIds": null
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
To create a content attribute, you can use the FeedingContentAttribute/Create API. (To learn more about the API, see Create a content attribute.)
To create an interaction attribute, you can use the FeedingInteractionAutomation/Create API. (To learn more about the API, see Create an interaction attribute.)
Parameterize interaction channels
You must declare interaction channels for each (type of) interaction you want to set up between your solution and end-users.
Affective Computing by Virtue supports the following types of interaction channels:
Input channel
These channels enable EDAA™ to receive information to change the state of something.
Output channel
These channels enable EDAA™ to channel an action through something.
You can design your solution experience to enable users to simply touch an object (in VR) to trigger the action of changing a property (such as its color or material). In this case, you can configure the object as both an input and output channel.
For more information, see:
Example: Creating an input channel
The following example illustrates how to create an input interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"interaction_Channel_Types_Id": 1, // 1 => Input (Channel type)
"interaction_Input_Types_Id": 6, // 6 => Object (Input type)
"identifier": "wall_panels_ic",
"value": "wooden_wall_panels",
"active": true,
"interaction_Input_Category_Id": 472 // The triggered action attribute category that groups actions triggered when a user touches the object
}
Response sample
{
"result": {
"tenantId": 6,
"projectId": {{data_collection_projectID}},
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": 6, // 6 => Object (Input type)
"tenantName": null,
"projectName": null,
"identifier": "wall_panels_ic",
"value": "wooden_wall_panels",
"active": true,
"interaction_Input_Category_Id": 472,
"interaction_Input_Category_Name": "wall_panels_options",
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": null,
"destination_Entity_Types_Id": null,
"destination_Entity_Types_Name": null,
"destination_Entity_Id": null,
"destination_Entity_Object_Id": null,
"interaction_Channel_Types_Id": 1,
"id": 2480
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an output channel
The following example illustrates how to create an output interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the project
"interaction_Channel_Types_Id": 2, // 2 => Output (Channel type)
"identifier": "wall_panels_oc",
"value": "pvc_slats",
"active": true,
"destination_Entity_Object_Id": 1234, // The object ID of the object that changes, i.e., in this case, the wall panel
"Triggered_Action_Attribute_Id": 530, // The triggered action attribute category that groups the required triggered actions
}Response sample
{
"result": {
"tenantId": 6,
"projectId": {{data_collection_projectID}},
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": null,
"tenantName": null,
"projectName": null,
"identifier": "wall_panels_oc",
"value": "pvc_slats",
"active": true,
"interaction_Input_Category_Id": null,
"interaction_Input_Category_Name": null,
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": "Wall panel",
"destination_Entity_Types_Id": 2,
"destination_Entity_Types_Name": null,
"interaction_Channel_Types_Id": 2,
"id": 2481
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize logics
Logics define (or modify) your solution's behavior and enable you to personalize interactions.
EDAA™ can deliver an action only if a corresponding logic exists. Therefore, parameterizing logics is a critical step.
A logic has the following components:
Activator
The recipient of the logic based on psychological profile (except in the case of diagnostics logics).
Condition
All events that can trigger the logic.
Action
The resulting action that needs to be delivered by EDAA™. It can either be one specific action or any action grouped under an attribute.
Operators
Logical operators (AND and OR) that define the flow of the logic and enable you to introduce multiple conditions and rules of interdependence between conditions and actions.
For the RDC project, you must create the following logics:
Diagnostics
Used for the diagnostics process. It is mandatory to create them for the RDC project.
Activator
All new users
Condition
Whenever a new user is detected by EDAA™ (and is interacting with the solution for the first time)
Action
A set of questions to help EDAA™ establish the preliminary psychological profile of the user:
Five initial questions are pre-defined by EDAA™.
(Optional) You can create three additional questions to establish the user's persona type (to activate specific actions by leveraging the attribute name).
For feedback questions
Defines how EDAA™, through NPCs, can interact with users to validate the proposed design.
Activator
All profiles
Condition
A design option was presented to the user.
Action
The attribute that contains the questions. Note: If you want to ask a specific question (for example, about a specific design option), you can set up the specific action instead of configuring an attribute from which actions are selected.
Personalizing object design
Defines how object designs (for example, color or material) are updated based on user preferences.
Activator
All profiles
Condition
The user interacts with an object.
Action
The attribute that contains alternate design options for the object as triggered actions.
For more information, see:
You must also:
Create and configure logics (related to the ones you create for your solution) in Unreal Engine (UE).
Connect UE to Orchestra (Affective Computing by Virtue's infrastructure) to enable data flow.
Ensure that the UE logics are converted into a format that Affective Computing can consume.
Example: Creating a logic blueprint
The following example describes how to create a logic blueprint using the logics/CreateLogic API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogic
To learn more about the API, see Create a logic blueprint.
Request sample
{
"logicName": "room1_logics",
"bluePrinting_logics_type_id": 2, // 2 => User calibration (logic type)
"projectId": {{data_collection_projectID}}, // The project ID of the project
"activator": 1, // 1 => Profile (Logic activator)
"activator_Type": 22, // 2 => All profiles (Logic activator type)
"anchored": true
}
Response sample
{
"result": {
"logicId": 48165
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic condition
The following example describes how to create and configure the condition that triggers a logic using the logics/CreateLogicCondition API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicCondition
To learn more about the API, see Add a logic condition to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicConditionList": [
{
"condition_id": 4, // 4 => Environmental (condition)
"condition_type_id": 47, // 47 => Object (condition type)
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2480 //The input of the wall_panels object
}
]
}, {
"condition_id": 4, // 4 => Environmental (condition)
"condition_type_id": 38, // 38 => External signal (condition type)
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2482 //Input based on question asked about wall panels
}
]
}
]
}
Response sample
{
"result": {
"condition_Ids": [
53036, //The condition ID for object interaction
53037//The condition ID for feedback
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic action
The following example describes how to configure the action delivered when a logic is triggered using the logics/CreateLogicAction API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicAction
To learn more about the API, see Add a logic action to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicActionList": [
{
"execution_order": 0,
"feeding_content_Direction_Id": 3, // 3 => Output as interaction channel
"action_type_Id": 4, // 4 => Triggered action
"interaction_Channels_Id": 2481, // The channel for replacing the object
"anchored": true
}
]
}
Response sample
{
"result": {
"actionIds": [
22903
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Mapping a logic action with a logic condition
The following example describes how to map a logic action with a logic condition using the logics/CreateLogicActionMapping API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicActionMapping
To learn more about the API, see Map conditions to actions for a logic blueprint.
Request sample
{
"logicId": 48165, // The logic
"conditionActionList": [
{
"conditionId": 53036, // The condition of the user selecting an object
"actionId": 22903, // The interaction channel that contains triggered actions of possible design options for the object
"logical_Operator": "And" // As one condition is linked to one action, this doesn't have any effect
}
]
}
Response sample
{
"result": {
"conditionActionMappingId": [
17375
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 3: Run the experience to collect initial data
Declaring end-users
Solutions powered by Affective Computing can have the following kinds of users based on function:
Internal users: Administrative or managerial users who might be responsible for tasks such as feeding actions and managing logic blueprints. Typically, they wouldn't interact with the solution directly as end-users, and can therefore be considered as internal users.
External users: Users who interact directly with the solution as end-users and participate in interactions. External users could be the same people as the internal users or a completely different set.
For more information, see Creating and managing end users of the solution.
When running the solution, you must declare your end-users (external users). Doing this generates their user ID, which is a unique ID that enables identifying them, tracking their activity, and monitoring the impact of the solution experience on them.
You can declare end-users using the ExternalUser/Create API. To learn more about the API, see Create a user ID for a new user.
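If you declare many end-users, you can script the calls to the ExternalUser/Create API. The sketch below is illustrative only: the URL path, request fields, and response handling are assumptions, so use the fields documented in Create a user ID for a new user.
Sample script (Python)
# Sketch: declare an end-user via ExternalUser/Create and keep the returned user ID.
# The URL path and request fields below are hypothetical placeholders.
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header

body = {
    "projectId": 742,          # RDC project ID from the examples
    "name": "Jane Customer",   # Hypothetical field
}
resp = requests.post(f"{SERVER_URL}/api/services/app/ExternalUser/Create",
                     json=body, headers=HEADERS)
resp.raise_for_status()
external_user_id = resp.json()["result"]  # Assumed to contain the generated user ID
print("Store this user ID for interaction sessions:", external_user_id)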
Running interactions
From the point of view of Affective Computing (powered by Virtue), running an experience simply means enabling the interactions between the solution (in this case, the design validation experience in VR) and its users to run.
To enable running your solution experience, for each type of interaction it includes, you must do the following:
Initialize interaction sessions using the Interaction/InitializeSession API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/InitializeSession
To learn more about the API, see Initialize an interaction session.
Request sample
{
"externalUserID": "50c91e36-3f69-4245-ad20-53db39d780c9", // unique identifier of the user
"projectID": 493, // The project ID of the project
"foreign_identity": "Determining whether the user likes the light fixture",
"language": "en",
"client_ip": "10.1.192.128"
}
Response sample
{
"result": {
"interaction_Session": 1269064,
"isValidated": false,
"force_user_calibration": false
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Interact using the Interaction/Interact API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/Interact
To learn more about the API, see Perform an interaction.
Request sample
{
"interaction_Session": 35006, // session ID of the initialized interaction session
"beats_Per_Minute": 75,
"time_Taken_To_Response": "1",
"interact_type": "external_signal",
"interact_value": "light_fixture_ic", // Interaction channel that provides the input as an external signal
"mode": "action"
}
Response sample
{
"result": {
"sound": null,
"statement": "",
"question": "",
"content": [],
"music": "",
"action": null,
"interaction_channel_id": 530, // Interaction channel ID of the input channel configured as the interact value
"triggered_action": "light_fixture_ta", // Triggered action delivered as part of the interaction
"last_stage": false,
"repeat_stage": false,
"audio_speed": "medium",
"change_mode": null,
"status": "success",
"errorcode": "",
"errorMessage": ""
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
End interaction sessions using the Interaction/end_interaction API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/end_interaction
To learn more about the API, see End an interaction session.
Request sample
{
"interaction_Session": 35006
}
Response sample
{
"result": true,
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
When running an experience (by running interactions), you must do the following:
End-users must first complete the diagnostics process. To take them through the diagnostics process, you can use the Interaction API. (For more information, see Running interactions.)
After the diagnostics, end-users can start the experience by observing design options in VR. To perform this step, you can use the same Interaction API in user calibration mode.
Perform interactions with Affective Computing (powered by Virtue), providing calibration data (a minimal client-side sketch of this flow follows this list):
For voice interactions, you must convert audio to base64 and submit it through the respective interaction channel (specify it as the interact_value).
When an object is changed in an interaction (for example, an end-user dislikes an option and the designer changes it), submit the object ID and the corresponding triggered action ID that reflects the new value.
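The following sketch shows this client-side flow end to end for a single end-user, reusing the Interaction/InitializeSession, Interaction/Interact, and Interaction/end_interaction request bodies shown above. The mode sequence, interact values, and heart-rate numbers are illustrative assumptions.
Sample script (Python)
# Sketch: one end-user session covering diagnostics, user calibration, and action mode.
# The values passed to interact() are placeholders.
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
PROJECT_ID = 493
USER_ID = "50c91e36-3f69-4245-ad20-53db39d780c9"

def interact(session_id: int, mode: str, interact_value: str, bpm: int) -> dict:
    """Send one interaction in the given mode (diagnostics, user_calibration, or action)."""
    body = {
        "interaction_Session": session_id,
        "beats_Per_Minute": bpm,
        "time_Taken_To_Response": "1",
        "interact_type": "external_signal",
        "interact_value": interact_value,
        "mode": mode,
    }
    resp = requests.post(f"{SERVER_URL}/api/app/v2/Interaction/Interact", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["result"]

# 1. Initialize a session for the user.
init = requests.post(f"{SERVER_URL}/api/app/v2/Interaction/InitializeSession", json={
    "externalUserID": USER_ID,
    "projectID": PROJECT_ID,
    "foreign_identity": "Design validation walkthrough",
    "language": "en",
}, headers=HEADERS).json()["result"]
session_id = init["interaction_Session"]

# 2. Diagnostics first, then user calibration, then the action-mode walkthrough.
interact(session_id, "diagnostics", "light_fixture_ic", bpm=75)
interact(session_id, "user_calibration", "light_fixture_ic", bpm=78)
result = interact(session_id, "action", "light_fixture_ic", bpm=80)
print("Triggered action delivered:", result["triggered_action"])

# 3. End the session.
requests.post(f"{SERVER_URL}/api/app/v2/Interaction/end_interaction",
              json={"interaction_Session": session_id}, headers=HEADERS)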
During the whole process, Affective Computing (powered by Virtue) collects user responses and heart rate information so that the data can be augmented during the simulation phase (with VHs who are clones of the end-user).
Invoking the endpoints of the Interaction API produces the necessary data and defines the stages for simulating the interactions in the next phase. Doing this provides Affective Computing (powered by Virtue) with the necessary information to simulate the interactions of the VHs.
Step 4: Set up the simulation project
Similar to Step 1: Create the RDC project, you can create a project for simulation.
You can also clone (duplicate) the RDC project and link the two. Cloning the project will ensure that your parameterization is consistent, especially if you don't need to configure different inputs in the two projects.
Example: Creating a simulation project that relates to an RDC project
The following example illustrates how to create a simulation project using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
{
"projectName": "Simulation - SIM",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 12, // 13 -> Neuro-architecture
"duplicate_Project_Id": 742, // The project ID of the Data_Collection project
"RelatesTo_Project_Id": 742, // The project ID of the Data_Collection project
"project_Purpose": "Simulation", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": false, // Deactivates product calibration mode for the simulation.
"interactionSetups":[
{
"interaction_setup_id":2,
"interaction_mode":"diagnostics"
},
{
"interaction_setup_id":3,
"interaction_mode":"diagnostics"
},
{
"interaction_setup_id":2,
"interaction_mode":"action"
},
{
"interaction_setup_id":3,
"interaction_mode":"action"
},
{
"interaction_setup_id":27,
"interaction_mode":"action"
},
{
"interaction_setup_id":2,
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":3,
"interaction_mode":"user_calibration"
},
{
"interaction_setup_id":27,
"interaction_mode":"user_calibration"
}
]
}
Response sample
{
"result": {
"duplication_Summary": {
"feedingDataMapping": {
"127543": 127553,
"127544": 127554,
"127545": 127555,
"127546": 127556,
"127547": 127557
},
"contentAttributesMapping": {},
"interactionAttributesMapping": {},
"destinationEntity_Mapping": {
"14866": 14869
},
"interactionChannelsMapping": {
"2480": 2488,
"2481": 2489,
"2482": 2490,
"2483": 2491
},
"interactionChannelsInputCategoryMapping": {
"472": 474
},
"logicsMapping": {
"48165": 48212
},
"logicActionsMapping": {
"22903": 22940,
"22904": 22941
},
"logicConditionsMapping": {
"53036": 53085,
"53037": 53086
},
"triggeredActionAttributesMapping": {
"530": 534,
"531": 535
},
"destinationEntityEnvironmentObjects_Mapping": {
"663": 665
}
},
"projectName": "Simulation-SIM",
"duplicated_Project_Id": 742,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 12,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": false,
"isRecurringUser": false,
"project_Purpose": "Simulation",
"relatesTo_Project_Id": 742,
"project_Status": "draft",
"id": 745
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterizing the simulation project
Data inputs
For the simulation, the data recipients are Virtual Humans (VHs) instead of real human users.
No physiological data inputs are required for the simulation phase, as the physiological responses of the VHs are managed by Affective Computing (powered by Virtue) during simulation. You can configure the same user motion data inputs as you did for the RDC project, because VHs also indicate their preferences in the same manner during the simulated experience.
Entities
Environment: Whether or not you need to create an environment for simulation depends on how your solution is designed. If you want to use a different environment for the simulation phase of the solution, you can create it for the simulation project.
Virtual Humans: For the simulation phase, you must declare as many VHs as there are end-users participating in the experience, because the VHs are clones of the real human users.
Objects
You must declare additional environmental objects based on the requirements of your solution. Note: If you have cloned the RDC project, the objects declared for the RDC are also applicable in the simulation.
Actions and attributes
For all new objects created for simulation, you must create actions that correspond to design variations and also group them under appropriate (new) attributes. Similarly, if you want to ask any additional questions to the VHs participating in the simulated experience (other than the ones already declared for the RDC project, assuming that you have cloned that project to create the simulation), you must create the corresponding actions and attributes.
Interaction channels
Similar to actions and attributes, you must also create the corresponding interaction channels.
Logics
Similar to actions, attributes, and interaction channels, you must create any logics you require for the simulation phase of the solution. Note: You do not require diagnostics logics for the simulation.
Step 5: Generate data for simulation
To generate the required data to run the simulation, you must do the following:
Generate raw data
Completing this task is the same as running the experience using the RDC project.
When the InitializeSession, Interact, and end_interaction endpoints of the Interaction API are called for each interaction between your solution and end-users, the data, actions, and stages (successful interactions) are produced.
Retrieving stages
The Interaction API can return the list of stages with their interaction types, such as voice, text, QR, or external signal.
Each stage includes parameters such as interact_value, beats_Per_Minute (heart rate), and more, which are required for the simulation.
To learn more about the APIs, see:
Augment raw data for simulation
Data augmentation is the process of creating variations of the data collected during RDC for the purpose of simulation.
You can use the Interaction/StartDataAugmentation endpoint to begin data augmentation for the (cloned) simulation.
The following example illustrates how to begin data augmentation for the clone simulation using the Interaction/StartDataAugmentation endpoint:
API
POST{{Server_URL}}/api/services/app/v2/Interaction/StartDataAugmentation
To learn more about the API, see Begin data augmentation for clone simulations.
Request sample
{
"session_ID": 31491, // Interaction session ID (stage) from the data collection to be augmented
"simulation_Project_ID": 146 // The project ID of the data collection project
}
Response sample
{
"result": {
"augmentation_Patch_ID": 31491,
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
After collecting raw data and augmenting it, if you add additional actions to your project, you must perform both tasks again: run the corresponding interactions (which deliver the new actions to users) and repeat data augmentation.
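In practice, you can script augmentation across every collected session. The following is a minimal sketch that reuses the StartDataAugmentation request body shown above; the session IDs, project ID, and authorization header are illustrative assumptions.
Sample script (Python)
# Sketch: start data augmentation for every collected stage (interaction session).
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
SIMULATION_PROJECT_ID = 745                          # Simulation project ID from the examples

collected_session_ids = [31491, 31492, 31493]        # Hypothetical RDC session IDs

for session_id in collected_session_ids:
    body = {
        "session_ID": session_id,
        "simulation_Project_ID": SIMULATION_PROJECT_ID,
    }
    resp = requests.post(f"{SERVER_URL}/api/services/app/v2/Interaction/StartDataAugmentation",
                         json=body, headers=HEADERS)
    resp.raise_for_status()
    print("Augmentation patch:", resp.json()["result"]["augmentation_Patch_ID"])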
Step 6: Run the simulation
Unlike RDC, in which a few real human users perform the interactions to set up a baseline, in the simulation, the users are Virtual Humans (VHs), who are EDAA™-powered emotionally-driven NPCs cloned from the profiles of the real users.
When running a simulation, you must do the following:
Summon the VHs
At the beginning of running the simulation, you must summon your VH entities (bring them to life) so that they can start interacting with the solution.
You can do this using the SummonNPC endpoint in the Interaction API.
The API creates a virtual twin of the specified real human user who was involved in the raw data collection process.
This API can accept multiple entries to bulk-summon the VHs for the simulation project.
Ensure that you summon VHs before starting the simulation.
Example: Summoning a VH
The following example illustrates how to summon a VH cloned from the psychological profile of an existing user:
API
POST {{Server_URL}}/api/v2/Interaction/SummonNPC
To learn more about the API, see Summon a VH.
Request sample
{
"entity_ID": 342543, // Entity ID of the VH cloned from a real end-user
"user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile must power the VH
"projectId": 34245
}Response sample
{
"id": 123, // VH ID of the summoned VH
"projectId": 34245,
"user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile powers the VH
"entity_ID": 342543,
"active": true,
"lastActiveTime": "2024-11-23T15:24:54.465Z"
"creationTime": "2024-11-23T15:24:54.465Z"
"endTime": null
}You must store the retrieved VH IDs so that you can kill (terminate) the VHs at the end of the simulation.
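The sketch below summons one VH per cloned end-user and stores the returned VH IDs for later termination, reusing the SummonNPC request and response shown above; the user-to-entity mapping and authorization header are illustrative assumptions.
Sample script (Python)
# Sketch: summon one VH clone per end-user and keep the returned VH IDs for KillNPC.
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
SIMULATION_PROJECT_ID = 34245

clones = [  # Hypothetical mapping of real users to VH entities
    {"user_ID": "john_doe", "entity_ID": 342543},
    {"user_ID": "jane_doe", "entity_ID": 342544},
]

summoned_vh_ids = []
for clone in clones:
    body = {**clone, "projectId": SIMULATION_PROJECT_ID}
    resp = requests.post(f"{SERVER_URL}/api/v2/Interaction/SummonNPC", json=body, headers=HEADERS)
    resp.raise_for_status()
    summoned_vh_ids.append(resp.json()["id"])  # Keep these for KillNPC at the end

print("Summoned VH IDs:", summoned_vh_ids)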
Validate availability of VHs
VHs can have three possible statuses: Offline, Preparing, and Online.
Your summoned VHs are ready for use in interactions only when they are in the Online status. If you try to perform an interaction before that, an error occurs because their instance isn't online yet.
You can use the SummonedEntityStatus endpoint of the Interaction API to view the availability statuses of summoned VHs. To learn more about the API, see View the status of all summoned VHs.
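Before simulating interactions, you can poll the status endpoint until every summoned VH reports Online. The sketch below is an assumption-heavy illustration: the URL path, HTTP method, and response shape for SummonedEntityStatus are guesses, so adapt it to the actual schema described in View the status of all summoned VHs.
Sample script (Python)
# Sketch: poll SummonedEntityStatus until all summoned VHs report Online.
# The response shape assumed here (a list of {"id", "status"} entries) is an assumption.
import time
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
summoned_vh_ids = [123, 124]                         # VH IDs returned by SummonNPC

def all_online() -> bool:
    resp = requests.get(f"{SERVER_URL}/api/v2/Interaction/SummonedEntityStatus", headers=HEADERS)
    resp.raise_for_status()
    statuses = {vh["id"]: vh["status"] for vh in resp.json()["result"]}  # Assumed shape
    return all(statuses.get(vh_id) == "Online" for vh_id in summoned_vh_ids)

while not all_online():
    time.sleep(5)  # Wait before polling again; VHs may still be Preparing
print("All VHs are Online; the simulation can start.")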
Simulate interactions
Running a simulation simply means running a script in Unreal Engine that sequentially executes the different interactions that, in sum, make up the simulated experience. Do the following:
List and loop all stages retrieved when generating data for the simulation (during Step 5: Generate data for simulation).
For each stage, replace the parameters with those from the simulation and initialize, run, and end the interactions using the Interaction API. (For more information, see Running interactions.) This step represents actually running the simulation and enabling the VHs to interact with the objects; a minimal sketch of this loop follows this list. Note: The Interaction API sends the initial state of all objects to Affective Computing (powered by Virtue) by providing the object ID and the triggered action ID.
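The following sketch illustrates such a loop for one VH outside Unreal Engine, replaying stage parameters through the Interaction API. It assumes each stage record carries the interact_type, interact_value, and beats_Per_Minute collected during RDC, and that the VH is addressed by its cloned user ID; both are illustrative assumptions.
Sample script (Python)
# Sketch: replay the collected stages for one summoned VH in the simulation project.
import requests

SERVER_URL = "{{Server_URL}}"                        # Placeholder, as in the samples
HEADERS = {"Authorization": "Bearer <access-token>"} # Assumed auth header
SIMULATION_PROJECT_ID = 745
VH_USER_ID = "john_doe"  # The cloned user driving this VH (assumed addressing)

stages = [  # Hypothetical stage records retrieved in Step 5
    {"interact_type": "external_signal", "interact_value": "light_fixture_ic", "beats_Per_Minute": 75},
    {"interact_type": "external_signal", "interact_value": "wall_panels_ic", "beats_Per_Minute": 78},
]

# Initialize one session for the VH, replay every stage, then end the session.
init = requests.post(f"{SERVER_URL}/api/app/v2/Interaction/InitializeSession", json={
    "externalUserID": VH_USER_ID,
    "projectID": SIMULATION_PROJECT_ID,
    "foreign_identity": "Clone simulation walkthrough",
    "language": "en",
}, headers=HEADERS).json()["result"]
session_id = init["interaction_Session"]

for stage in stages:
    body = {"interaction_Session": session_id, "mode": "action",
            "time_Taken_To_Response": "1", **stage}
    result = requests.post(f"{SERVER_URL}/api/app/v2/Interaction/Interact",
                           json=body, headers=HEADERS).json()["result"]
    print("Stage delivered:", result.get("triggered_action"))

requests.post(f"{SERVER_URL}/api/app/v2/Interaction/end_interaction",
              json={"interaction_Session": session_id}, headers=HEADERS)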
Best practice
Affective Computing (powered by Virtue) enables you to resume a simulation at any point if it is interrupted manually or due to any other reason. However, to continue the simulation with the same situational context as the point of interruption, details such as the session IDs, entity IDs, and the number of perceptions are required.
Therefore, as a best practice, when running simulations, we recommend saving this information at your end.
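A minimal way to follow this best practice is to checkpoint the resume context locally after each stage. The sketch below uses a simple JSON file; the file format and field names are a local convention, not part of the API.
Sample script (Python)
# Sketch: persist the context needed to resume an interrupted simulation
# (session ID, entity ID, and number of perceptions, as noted above).
import json

def save_checkpoint(path: str, session_id: int, entity_id: int, perceptions: int) -> None:
    """Write the resume context for one VH to a local JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({
            "interaction_Session": session_id,
            "entity_ID": entity_id,
            "perceptions": perceptions,
        }, f)

def load_checkpoint(path: str) -> dict:
    """Read the resume context back when restarting the simulation."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Example usage with placeholder values:
save_checkpoint("sim_checkpoint.json", session_id=1269064, entity_id=342543, perceptions=42)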
Terminate the VHs
After completing the simulation (looping through all the experience data), you must terminate the VHs.
You can do this using the KillNPC endpoint in the Interaction API.
You can use this API to bulk-terminate multiple VHs.
Example: Terminating a VH
The following example illustrates how to terminate a VH:
API
POST{{Server_URL}}/api/v2/Interaction/KillNPC
To learn more about the API, see Terminate the VHs.
Request sample
{
"npc_ID": 123, // Entity ID of summoned VH
}Response sample
{
"id": 123, // VH ID of the summoned VH
"projectId": 34245,
"user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile powers the VH
"entity_ID": 342543,
"active": false,
"lastActiveTime": "2024-11-28T12:32:57.465Z"
"creationTime": "2024-11-23T15:24:54.465Z"
"endTime": "2024-11-28T12:32:57.465Z"
}Step 7: Analyze the results
You can analyze the level of engagement in each session to validate your solution's performance. You can do this using the Reporting/GetEngagementBySession API. See the following example:
API
GET {{Server_URL}}/api/services/app/Reporting/GetEngagementBySession
To learn more about the API, see Viewing insights from project data.
Request sample
{
"sessionId": 238944,
"projectID": {{projectID}}, // The project ID of the project
"foreign_identity": "User's response to boho design suite",
"language": "en",
"client_ip": "10.1.192.128"
}
Response sample
{
"result": [
{
"sessionId": 238944,
"engagement": 0.0,
"stage": 0,
"action": "",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 1,
"action": "Do you like this design?",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 1.0,
"stage": 4,
"action": "Would you like to replace the light fixture with a rattan chandelier",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 5,
"action": "Are you fond of the beach cafe design aesthetic.",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 6,
"action": "Let me show you how this would look in wood.",
"entityId": 16812,
"object_status": []
}
],
"targetUrl": null,
"success": true,Additionally, as the solution is built in Unreal Engine, you can define where and how the simulation results need to be consumed. You can even use an external analytics or reporting tool to visualize and consume the results.