Tutorial: Implementing a Bulk Simulation solution
A step-by-step guide to building a solution that validates a human experience using bulk simulation
This tutorial explores an example Affective Computing (powered by Virtue) solution that simulates real human experiences, particularly focusing on interactions with various objects and the reactions of Virtual Humans (VHs) in a 3D environment.
During the simulation, VHs, who are powered by psychological profiles augmented from real humans, participate in an experiential scenario, which is a virtual recreation of a real experience. This enables you to gain insights about how multiple variables could affect humans psychologically and validate whether the experience's design is viable.
Business case
When designing frequently-accessed public buildings such as hospitals and banks, real estate developers, architects, and construction contractors must factor in effective emergency evacuation plans to protect public health and safety.
Challenges
An emergency evacuation plan must be validated very early in the overall process of real estate development. This is because it would be both difficult and expensive to integrate changes into the building's design after construction is completed.
Designing and implementing a validation solution
The Affective Computing (powered by Virtue) framework and its underlying technology, EDAA™, are an excellent fit for building a validation solution for this business case.
You can leverage the simulation capabilities of our platform to simulate various scenarios with emotionally-driven non-playable characters (VHs) and validate your evacuation plan before implementing it in the real world.
Simulation enables you to recreate and simulate multiple specific situations that, in sum, form the experience.
Overview
A solution powered by Affective Computing (powered by Virtue) and based on Bulk Simulation that aims at validating an experience typically consists of the following phases:
Diagnostics and calibration
Diagnostics is the process in which Affective Computing (powered by Virtue) analyzes and establishes a preliminary (baseline) psychological profile for each user, using the powerful capabilities of EDAA™, its underlying technology.
Calibration is the process of validating and adjusting the previously-established psychological profile of each existing user based on their current psychological state.
For detailed information, see Diagnostics and calibration.
Both these processes work hand in hand with data collection, which is the first phase of implementing the solution. Data collection provides your solution with baselines, context, and boundaries.
To collect this initial data, you can either run a real experience or input synthetic data from an external data set.
However, as the core purpose of this solution is to validate a critical component in real-estate development that could potentially impact human safety and well-being (which can only be achieved satisfactorily if the initial data is obtained directly from real humans), we recommend running the event of raw data collection for this use case.
Raw data collection (RDC)
Before simulating an experience, Affective Computing (powered by Virtue) must be made aware of the boundaries of each situation. In addition to improving solution performance, doing this prevents AI hallucinations.
The event of raw data collection (RDC) can help you achieve this. RDC simply means running the experience provided by your solution using real human users.
This process introduces the reality of the experience and provides context when situations similar to the ones faced by real human users are simulated using Virtual Humans (VHs).
To learn more about RDC, see Importance of raw data collection for simulation.
Simulation
Simulation enables you to replicate the situations that real human users face and observe their effects over a custom period of time.
Simulation means running an experience in a virtual environment using VHs, whose psychological profiles are cloned or augmented from that of the real human users.
Clone simulation
A type of experience simulation in which each VH has a psychological profile that matches that of a specific real human end-user.
Bulk simulation
A type of experience simulation in which VHs are not 1:1 clones of real human end-users, but instead based on augmented data from real users’ psychological profiles. This tutorial aims to help you understand how to implement bulk simulation.
For more information, see Simulation.
Result analysis
After successfully implementing the solution, you can analyze user engagement levels in each interaction session to gain insights into the solution's performance and validate the feasibility of the evacuation plan you designed.
Additionally, depending on the specifics of your solution's requirements, design, and definition of success, you can integrate external data analytics tools to visualize the results, generate reports, and gain the specific insights you require.
Implementing the solution
The process of implementing the solution consists of the following steps:
Working with our APIs
To access and work with Affective Computing (powered by Virtue) through a low-code integration with our APIs, you must have an access token. To obtain an access token, you first need an API key.
You can obtain an API key from the Portal or through the ApiKeys/GenerateApiKey API itself. For more information, see Authenticating your data source to use our APIs.
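For illustration only, a minimal sketch of requesting an API key through the ApiKeys/GenerateApiKey API might look like the following; the endpoint path and the keyName parameter shown here are assumptions, so confirm both in Authenticating your data source to use our APIs.
API
POST {{Server_URL}}/api/services/app/ApiKeys/GenerateApiKey
Request sample
{
  "keyName": "evacuation-validation-integration" // Hypothetical parameter; a label for the generated API key
}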
To separate data collection and simulation, you must create two projects, one for each purpose. You can clone and link the two projects to keep their parameterization uniform; because the projects are linked, the simulation project uses the data collected during RDC.
Step 1: Create the RDC project
You can create an Affective Computing (powered by Virtue) project for raw data collection (RDC) by following the procedure described in How to set up a solution. You can use this project to run multiple RDC sessions.
Example: Creating a project
The following example illustrates how to create a project for the purpose of RDC using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
{
"projectName": "Data Collection - RDC",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 12, // 12 -> Virtual soul recreation
"project_Purpose": "Data_Collection", //The value for this parameter can be Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": true, // Runs the RDC project in product calibration mode.
"interactionSetups": [
{
"interaction_setup_id": 5,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3, // Configuring the interaction setup for Speech
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 5, // Configuring the interaction setup for EEG
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 8,// Configuring the interaction setup for Time to answer
"interaction_mode": "user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Data Collection - RDC",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 12,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "Data_Collection",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 2: Parameterize the RDC project
After creating the RDC project, you must parameterize it to customize the behavior of Affective Computing (powered by Virtue), specific to your use case, when users interact with your solution.
For detailed information about parameterization, see Parameterizing a project.
In this step, you can do the following:
Parameterize data inputs
You must declare (define) data inputs for each type of (raw or synthetic) external data you want to utilize in your solution. You can do this at the same time as creating the project.
For the RDC project, you must configure the following data inputs:
Real human user
Environmental
Heat
Smoke
Objects
Note: Real human users would experience the effects of heat and smoke (as a result of a fire emergency for which the evacuation plan is being validated).
Objects in the environment would affect the decision-making of users regarding which path to take. All of these inputs must be provided to the solution as environmental inputs.
Physiological
EEG, through a wearable headset
Speech, through a microphone
User motion
Declaration of rooms or sections within the building
Conversation initiation or trigger and answers (Time to answer, number of words, pauses between words, duration of the answer, meaning, and more)
For more information, see Understanding data inputs.
Example: Configuring data inputs
The following example illustrates how to declare data inputs using the interactionSetups parameter of the Project/Create API.
This parameter enables you to declare inputs and their respective operational modes.
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project. Also see Viewing available data inputs (interaction setups).
Request sample
{
"projectName": "Data Collection - RDC",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 12, // 12 -> Virtual soul recreation
"project_Purpose": "Data_Collection", //The value for this parameter can be Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": true, // Runs the RDC project in product calibration mode.
"interactionSetups": [
{
"interaction_setup_id": 5,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3,
"interaction_mode": "diagnostics"
},{
"interaction_setup_id": 3, // Configuring the interaction setup for Speech
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 5, // Configuring the interaction setup for EEG
"interaction_mode": "user_calibration"
},{
"interaction_setup_id": 8,// Configuring the interaction setup for Time to answer
"interaction_mode": "user_calibration"
}
]
}
Response sample
{
"result": {
"projectName": "Data Collection - RDC",
"duplicated_Project_Id": null,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 12,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": true,
"isRecurringUser": false,
"project_Purpose": "Data_Collection",
"relatesTo_Project_Id": null,
"project_Status": "draft",
"id": 1234
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize entities and objects
Affective Computing (powered by Virtue) supports two types of entities:
Environment: The 3D environment where real or virtual human users interact with the solution.
Virtual Humans (VHs): Emotionally-driven NPCs (powered by Affective Computing) that interact with the solution during simulation.
In this step, the environment entity represents the environment in which users would participate in the RDC event.
For this solution, you require only one environment for both RDC and simulation; real humans can undergo the experience first (during RDC) to establish the context and later, the simulation can also take place in the same environment.
For example, during RDC, when a real human user enters a section of the building, Affective Computing (powered by Virtue) is notified about this event along with the environmental information that the person perceives. Later, during simulation, the same information is used.
Objects are the items in the environment with which your end users can interact. They can serve as a reference for data visualization or as channels to reflect status updates.
Tips for creating objects
As projects cannot have multiple environment entities, you can introduce the separation of objects by area by including the area name in the object name. For example, you can create an object called room1_wall_concrete, which is a concrete wall that isn't a source of any event but can receive the impact of environmental physics. Static objects like this can reflect how the perception of VHs would be impacted in that particular room. For static objects, no dynamic changes in the objects would be expected during the course of the experience.
If required, you can plan object naming so that the material each object is made of is embedded in its name.
The names of objects should also reflect their utility, as you cannot create multiple triggered actions for a single object. For example, you can create an object called room2_speaker_sound to indicate the source of the emergency alarm.
You can also create a perceived input by creating an environmental object. For example, when validating evacuation paths for a fire emergency, you could create objects called perceived_heat (with the temperature as its value) and perceived_smoke (with the visibility distance as its value). During the course of the experience (for example, when the fire alarm sounds, when a user enters a specific area inside the building, when overcrowding is detected in an area, or at any other significant time), you can pass the values of these objects to Affective Computing, as shown in the sketch after these tips.
As the environment is the same for both the RDC and simulation phases, environmental objects can apply to both RDC and simulation.
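For example, the following sketch declares the perceived_heat and perceived_smoke objects in a single call to the DestinationObject/CreateObjects API (described later in this tutorial); the environment ID shown is illustrative, so substitute the ID returned when you create your own environment entity:
API
POST {{Server_URL}}/api/services/app/DestinationObject/CreateObjects
Request sample
{
  "destination_Entity_Environment_Id": 139979, // Illustrative environment ID; use the ID of your own environment entity
  "destinationObjectList": [
    {
      "name": "Perceived heat",
      "description": "Carries the temperature perceived by users at significant moments of the experience",
      "identifier": "perceived_heat"
    },
    {
      "name": "Perceived smoke",
      "description": "Carries the visibility distance perceived by users at significant moments of the experience",
      "identifier": "perceived_smoke"
    }
  ]
}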
For more information, see:
Example: Creating an entity
The following example illustrates how to create an environment entity using the DestinationEntity/CreateEntities API:
API
POST {{Server_URL}}/api/services/app/DestinationEntity/CreateEntities
To learn more about the API, see Create an entity.
Request sample
{
"destination_Entity_Types_Id": 2, // 2 => 3D Environment
"projectId": {{data_collection_projectID}}, // The project ID of the Data_Collection (RDC) project
"entityList":
[
{
"name": "RealEstateOffice1",
"description": "Real estate office prototype - environment to be used for data collection”,
"identifier": "ENV_REO1"
}
]
}
Response sample
{
"result": [
139979
],
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an object
The following example illustrates how to create an object using the DestinationObject/CreateObjects API:
API
POST {{Server_URL}}/api/services/app/DestinationObject/CreateObjects
To learn more about the API, see Create an object.
Request sample
{
"destination_Entity_Environment_Id": 139979, // The environment in which the object must be created
"destinationObjectList": [
{
"name": "Room 2 speaker (sound)",
"description": "The speaker that sounds the alarm in room 2",
"identifier": "room2_speaker_sound"
}]
}
Response sample
{
"result": [
663
],
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize actions and attributes
Affective Computing (powered by Virtue) supports three categories of actions:
Content
Delivering any media, such as images, video, or sound.
Interactions
Delivering statements or asking questions.
Triggered actions
Delivering an action as a response to specific events or conditions.
Attributes personalize interactions between your solution and end-users by shaping and classifying actions. They can be considered as "folders" that group related or similar actions.
For example, an attribute called crowd can contain the actions that must be delivered in case of no crowd, moderate crowd, or overcrowding.
For the RDC project that validates fire emergency evacuation paths, you can create attributes and actions such as the following (a sketch of declaring one of the feedback questions appears after this list):
crowd
Triggered action
no_crowd
low_crowd
high_crowd
obstacles_room1
Triggered action
no_obstacle
exit_blocked
feedback_questions
Interaction
question1
question2
questionN
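As a sketch, you could declare one of the feedback questions above as an interaction action through the same FeedingData/Create API used later in this tutorial; the feeding_Action_Category_ID and feeding_Action_Type_ID values below are placeholders, so look up the IDs that correspond to the Interactions category and the question action type in your environment:
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
Request sample
{
  "projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
  "identity": "question1",
  "feeding_Value": "Which exit would you take if this corridor were blocked?",
  "feeding_Action_Category_ID": 2, // Placeholder; use the ID of the Interactions action category
  "feeding_Action_Type_ID": 1, // Placeholder; use the ID of the question action type
  "isCopyRighted": true,
  "isDiagnostics": false,
  "isVerified": true
}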
Tips for creating actions and attributes
As your solution relies on Affective Computing (powered by Virtue) to trigger questions to the participants, it is crucial to design the questions based on what you are trying to validate. For example, if you want to understand whether the user can correctly identify evacuation routes, you must design the questions accordingly. Questions like “What do you feel about this obstacle?” are not recommended: if your goal is to understand whether the user can find an exit even though an obstacle is in the room, the answer to that question does not convey the user’s response to that situation. You can declare questions as interaction actions and group them under appropriate interaction attributes (for example, room1_questions).
We also recommend thoughtfully planning who would ask the questions. As humans wouldn't typically encounter questions from an unseen voice, you could introduce an object in your project that is implemented as an NPC in Unreal Engine (during the experience, an NPC would approach a user when an obstacle is blocking the path and ask them where they could go).
For more information, see:
Example 1: Creating an action
The following example illustrates how to create a triggered action using the FeedingData/Create API. The action in this example is triggered when a user enters a room:
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
To learn more about the API, see Add an action.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"identity": "user_enters_room1",
"feeding_Value": "user_enters_room1",
"feeding_Action_Category_ID": 3, // 3 => Triggered action (The action category)
"feeding_Action_Type_ID": 18, // 18 => Value (The action type)
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true
}
Response sample
{
"result": {
"projectId": 742,
"identity": "user_enters_room1",
"feeding_Value": "user_enters_room1",
"feeding_Action_Type_ID": 18,
"feeding_Action_Category_ID": 3,
"isImported": false,
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true,
"id": 127544
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example 2: Creating an action
The following example illustrates how to create a triggered action using the FeedingData/Create API. The action in this example is triggered during overcrowding:
API
POST {{Server_URL}}/api/services/app/v2/FeedingData/Create
To learn more about the API, see Add an action.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"identity": "room1_high_crowd_find_another_exit",
"feeding_Value": "room1_high_crowd_find_another_exit",
"feeding_Action_Category_ID": 3, // 3 => Triggered action (The action category)
"feeding_Action_Type_ID": 18, // 18 => Value (The action type)
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true
}
Response sample
{
"result": {
"projectId": 742,
"identity": "room1_high_crowd_find_another_exit",
"feeding_Value": "room1_high_crowd_find_another_exit",
"feeding_Action_Type_ID": 18,
"feeding_Action_Category_ID": 3,
"isImported": false,
"isCopyRighted": true,
"isDiagnostics": false,
"isVerified": true,
"id": 127544
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example 3: Creating an attribute
The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered when a user enters a room.
API
POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create
To learn more about the API, see Create a triggered action attribute.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"name": "paths_attribute_user_enters_room1",
"action_Type_ID": 18, // 18 => Value (The action type)
"feedingDataIds": [
127544,127545,127546 // The triggered actions that are grouped under the attribute
]
}
Response sample
{
"result": {
"id": 530,
"name": "paths_attribute_user_enters_room1",
"isDeleted": false,
"isImported": false,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"action_Type_ID": 18,
"tenantName": null,
"projectName": null,
"project_Domain_Name": null,
"project_Function_Name": null,
"feedingDatasIds": null
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
To create a content attribute, you can use the FeedingContentAttribute/Create API. (To learn more about the API, see Create a content attribute.)
To create an interaction attribute, you can use the FeedingInteractionAutomation/Create API. (To learn more about the API, see Create an interaction attribute.)
Example 4: Creating an attribute
The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered during overcrowding:
API
POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create
To learn more about the API, see Create a triggered action attribute.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"name": "paths_attribute_room1_high_crowd",
"action_Type_ID": 18, // 18 => Value (The action type)
"feedingDataIds": [
127234,127678,127654 // The triggered actions that are grouped under the attribute
]
}
Response sample
{
"result": {
"id": 575,
"name": "paths_attribute_room1_high_crowd",
"isDeleted": false,
"isImported": false,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"action_Type_ID": 18,
"tenantName": null,
"projectName": null,
"project_Domain_Name": null,
"project_Function_Name": null,
"feedingDatasIds": null
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize interaction channels
You must declare interaction channels for each (type of) interaction you want to set up between your solution and end-users.
Affective Computing (powered by Virtue) supports two types of interaction channels:
Input channel
These channels enable EDAA™ (the underlying technology that powers Affective Computing (powered by Virtue)) to receive information to change the state of something. For example, for RDC, if you create an input interaction channel called crowd that is conditioned by the crowd attribute (which contains triggered actions that correspond to different crowd levels), you can record how users react when encountering no crowd, low crowd, or overcrowding.
Output channel
These channels enable EDAA™ to channel an action through something. For example, if you want an object in the environment to burn during the course of the experience (a fire emergency), you can declare it as the output channel.
For the RDC project, you must declare the event of overcrowding and the alternate path taken as input channels (and condition them with respective attributes).
For more information, see:
Example: Creating an input channel
The following example illustrates how to create an input interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"interaction_Channel_Types_Id": 1, // 1 => Input (Channel type)
"interaction_Input_Types_Id": 5, // 5 => External signal (Input type)
"identifier": "room1_ic",
"value": "room1_ic",
"active": true,
"interaction_Input_Category_Id": 472 // The triggered action attribute category that groups actions triggered when a user enters a room
}
Response sample
{
"result": {
"tenantId": 6,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": null,
"tenantName": null,
"projectName": null,
"identifier": "room1_ic",
"value": null,
"active": false,
"interaction_Input_Category_Id": null,
"interaction_Input_Category_Name": null,
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": null,
"destination_Entity_Types_Id": null,
"destination_Entity_Types_Name": null,
"destination_Entity_Id": null,
"destination_Entity_Object_Id": null,
"interaction_Channel_Types_Id": 1,
"id": 2480
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating an output channel
The following example illustrates how to create an output interaction channel using the InteractionChannel/Create API:
API
POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create
To learn more about the API, see Create an interaction channel.
Request sample
{
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"interaction_Channel_Types_Id": 2, // 2 => Output (Channel type)
"identifier": "room1_io",
"value": "room1_io",
"active": true,
"destination_Entity_Id": 14866,// The 3D environment entity
"Triggered_Action_Attribute_Id": 530, // The triggered action attribute category that groups the required triggered actions
"destination_Entity_Object_Id": 663 // The object that must catch fire
}
Response sample
{
"result": {
"tenantId": 6,
"projectId": 742,
"project_Domain_Id": null,
"project_Function_Id": null,
"interaction_Input_Types_Id": null,
"tenantName": null,
"projectName": null,
"identifier": "room1_io",
"value": null,
"active": false,
"interaction_Input_Category_Id": null,
"interaction_Input_Category_Name": null,
"triggered_Action_Name": null,
"triggered_Action_Id": null,
"isActive": true,
"destination_Entity_Name": null,
"destination_Entity_Object_Name": null,
"destination_Entity_Types_Id": 2,
"destination_Entity_Types_Name": null,
"destination_Entity_Id": 14866,
"destination_Entity_Object_Id": 663,
"interaction_Channel_Types_Id": 2,
"id": 2481
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterize logics
Logics define (or modify) your solution's behavior and enable you to personalize interactions.
EDAA™ can deliver an action only if a corresponding logic exists. Therefore, parameterizing logics is a critical step.
A logic has the following components:
Activator
The recipient of the logic based on psychological profile (except in the case of diagnostics logics).
Condition
All events that can trigger the logic.
Action
The resulting action that needs to be delivered by EDAA™. It can either be one specific action or any action grouped under an attribute.
Operators
Logical operators (AND and OR) that define the flow of the logic and enable you to introduce multiple conditions and rules of interdependence between conditions and actions.
For the RDC project, you must create the following logics:
Diagnostics
Used for the diagnostics process. It is mandatory to create this logic for the RDC project.
Activator
All new users
Condition
Whenever a new user is detected by EDAA™ (and is interacting with the solution for the first time)
Action
A set of questions to help EDAA™ establish the preliminary psychological profile of the user:
Five initial questions are pre-defined by EDAA™.
(Optional) You can create 3 additional questions to establish the user's persona type (to activate specific actions leveraging attribute names or to spawn EDNPCs based on specific user profiles).
For feedback questions
Defines how EDAA™, through an NPC, can interact with users to validate the evacuation plan.
Activator
All profiles
Condition
An obstacle exists in a room
Action
The attribute that contains the question.
Note: You can do this in all cases except when you want to ask a specific question (for example, when the user is located in a specific section of the building). In that case, you can select the specific interaction.
For more information, see:
You also need to create and configure logics (related to the ones you create for your solution) in Unreal Engine (UE).
This step establishes the connection between UE and Affective Computing (powered by Virtue) and ensures that the logics defined in UE are converted into the format of logics that Affective Computing can consume.
Example: Creating a logic blueprint
The following example describes how to create a logic blueprint using the logics/CreateLogic API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogic
To learn more about the API, see Create a logic blueprint.
Request sample
{
"logicName": "Movement_room1",
"bluePrinting_logics_type_id": 2, // 2 => User calibration (logic type)
"projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
"activator": 1, // 1 => Profile (Logic activator)
"activator_Type": 22, // 2 => All profiles (Logic activator type)
"anchored": true
}
Response sample
{
"result": {
"logicId": 48165
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic condition
The following example describes how to create and configure the condition that triggers a logic using the logics/CreateLogicCondition API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicCondition
To learn more about the API, see Add a logic condition to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicConditionList": [
{
"condition_id": 4,//Environmental
"condition_type_id": 38,//External signal
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2480 //The input of room 1 external signal
}
]
}, {
"condition_id": 4,//Environmental
"condition_type_id": 38,//External signal
"logical_Operator" : "Or",
"logicConditionValuesList": [
{
"interaction_Input_Id": 2482 //The input of room1_crowd external signal
}
]
}
]
}
Response sample
{
"result": {
"condition_Ids": [
53036, //The condition ID of entering room 1
53037 //The condition ID of overcrowding in room 1
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Creating a logic action
The following example describes how to configure the action delivered when a logic is triggered using the logics/CreateLogicAction API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicAction
To learn more about the API, see Add a logic action to a logic blueprint.
Request sample
{
"logicId": 48165,
"logicActionList": [
{
"execution_order": 0,
"feeding_content_Direction_Id": 3, // 3 => Output as interaction channel
"action_type_Id": 4, // 4 => Triggered action
"interaction_Channels_Id": 2481, // The channel for movement into another room
"anchored": true
}
]
}
Response sample
{
"result": {
"actionIds": [
22903
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Example: Mapping a logic action with a logic condition
The following example describes how to map a logic action with a logic condition using the logics/CreateLogicActionMapping API:
API
POST {{Server_URL}}/api/services/app/logics/CreateLogicActionMapping
To learn more about the API, see Map conditions to actions for a logic blueprint.
Request sample
{
"logicId": 48165, // The logic
"conditionActionList": [
{
"conditionId": 53036, // The condition of the user entering a room
"actionId": 22903, // The interaction channel that contains triggered actions when a user enters the room
"logical_Operator": "And" // As one condition is linked to one action, this doesn't have any effect
}
]
}
Response sample
{
"result": {
"conditionActionMappingId": [
17375
]
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Step 3: Run the experience to collect initial data
Declaring end-users
Solutions powered by Affective Computing can have the following kinds of users based on function:
Internal users: Administrative or managerial users who might be responsible for tasks such as feeding actions and managing logic blueprints. Typically, they wouldn't interact with the solution directly as end-users, and can therefore be considered as internal users.
External users: Users who interact directly with the solution as end-users and participate in interactions. These users can be considered as external users. External users could be the same or a completely different set of users from internal users.
For more information, see Creating and managing end users of the solution.
When running the solution, you must declare your end-users (external users). Doing this generates their user ID, which is a unique ID that enables identifying them, tracking their activity, and monitoring the impact of the solution experience on them.
You can declare end-users using the ExternalUser/Create API. To learn more about the API, see Create a user ID for a new user.
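For illustration, a minimal sketch of declaring an end-user with the ExternalUser/Create API could look like the following; the endpoint path and the name field are assumptions, so check Create a user ID for a new user for the exact request contract:
API
POST {{Server_URL}}/api/services/app/ExternalUser/Create
Request sample
{
  "projectId": {{data_collection_projectID}}, // The project ID of the Data_collection project
  "name": "participant_01" // Hypothetical field; a label for the RDC participant (the generated user ID is returned in the response)
}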
Running interactions
From Affective Computing (powered by Virtue)'s point of view, running an experience simply means enabling the interactions between the solution (in this case, the design validation experience in VR) and its users to run.
To enable running your solution experience, for each type of interaction it includes, you must do the following:
Initialize interaction sessions using the Interaction/InitializeSession API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/InitializeSession
To learn more about the API, see Initialize an interaction session.
Request sample
{
  "externalUserID": "50c91e36-3f69-4245-ad20-53db39d780c9", // unique identifier of the user
  "projectID": 493, // The project ID of the project
  "foreign_identity": "Validating whether entering room 1 is ideal during emergency evacuation",
  "language": "en",
  "client_ip": "10.1.192.128"
}
Response sample
{
  "result": {
    "interaction_Session": 1269064,
    "isValidated": false,
    "force_user_calibration": false
  },
  "targetUrl": null,
  "success": true,
  "error": null,
  "unAuthorizedRequest": false,
  "__abp": true
}
Interact using the Interaction/Interact API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/Interact
To learn more about the API, see Perform an interaction.
Request sample
{
  "interaction_Session": 35006, // session ID of the initialized interaction session
  "beats_Per_Minute": 75,
  "time_Taken_To_Response": "1",
  "interact_type": "external_signal",
  "interact_value": "room1_ic", // Interaction channel that provides the input as an external signal
  "mode": "action"
}
Response sample
{
  "result": {
    "sound": null,
    "statement": "",
    "question": "",
    "content": [],
    "music": "",
    "action": null,
    "interaction_channel_id": 530, // Interaction channel ID of the input channel configured as the interact value
    "triggered_action": "room1_ta", // Triggered action delivered as part of the interaction
    "last_stage": false,
    "repeat_stage": false,
    "audio_speed": "medium",
    "change_mode": null,
    "status": "success",
    "errorcode": "",
    "errorMessage": ""
  },
  "targetUrl": null,
  "success": true,
  "error": null,
  "unAuthorizedRequest": false,
  "__abp": true
}
End interaction sessions using the Interaction/end_interaction API. See the following example:
API
POST {{Server_URL}}/api/app/v2/Interaction/end_interaction
To learn more about the API, see End an interaction session.
Request sample
{
  "interaction_Session": 35006
}
Response sample
{
  "result": true,
  "targetUrl": null,
  "success": true,
  "error": null,
  "unAuthorizedRequest": false,
  "__abp": true
}
When running an experience (by running interactions), you must do the following:
End-users must first complete the diagnostics process. To take them through the diagnostics process, you can use the Interaction API. (For more information, see Running interactions.)
After the diagnostics, end-users can start the experience by observing design options in VR. To perform this step, you can use the same Interaction API in user calibration mode. Perform interactions with Affective Computing (powered by Virtue), providing calibration data. For voice interactions, you must convert audio to base64 and submit it through the respective interaction channel (specify it as the interact_value).
During the whole process, Affective Computing (powered by Virtue) collects user responses and heart rate information so that the data can be augmented during the simulation phase (with VHs whose profiles are augmented from the end-users).
Invoking the endpoints of the Interaction API produces the necessary data and defines the stages for simulating the interactions in the next phase. Doing this provides Affective Computing (powered by Virtue) with the necessary information to simulate the interactions of the VHs.
Step 4: Set up the simulation project
Similar to Step 1: Create the RDC project, you can create a project for simulation.
Typically, you can clone the RDC project and link the two. Cloning the project will ensure that your parameterization is consistent, especially as you don't need to configure different inputs in the two projects.
Example: Creating a simulation project that relates to an RDC project
The following example illustrates how to create a simulation project using the Project/Create API:
API
POST {{Server_URL}}/api/services/app/V2/Project/Create
To learn more about the API, see Create a project.
Request sample
{
"projectName": "Simulation - SIM",
"project_Domain_Id": 4, // 4 -> Digital user experience
"project_Function_Id": 12, // 12 -> Virtual soul recreation
"duplicate_Project_Id": 742, // The project ID of the Data_Collection project
"RelatesTo_Project_Id": 742, // The project ID of the Data_Collection project
"project_Purpose": "simulation_bulk", //The value for this parameter can be Data_Collection, Simulation, or simulation_bulk.
"isRecurringUser": false,
"productCalibrationStatus": false, // Deactivates product calibration mode for the simulation.
"interactionSetups": [
]
}
Response sample
{
"result": {
"duplication_Summary": {
"feedingDataMapping": {
"127543": 127553,
"127544": 127554,
"127545": 127555,
"127546": 127556,
"127547": 127557
},
"contentAttributesMapping": {},
"interactionAttributesMapping": {},
"destinationEntity_Mapping": {
"14866": 14869
},
"interactionChannelsMapping": {
"2480": 2488,
"2481": 2489,
"2482": 2490,
"2483": 2491
},
"interactionChannelsInputCategoryMapping": {
"472": 474
},
"logicsMapping": {
"48165": 48212
},
"logicActionsMapping": {
"22903": 22940,
"22904": 22941
},
"logicConditionsMapping": {
"53036": 53085,
"53037": 53086
},
"triggeredActionAttributesMapping": {
"530": 534,
"531": 535
},
"destinationEntityEnvironmentObjects_Mapping": {
"663": 665
}
},
"projectName": "Simulation-SIM",
"duplicated_Project_Id": 742,
"duplicated_Project_Name": null,
"project_Domain_Id": 4,
"project_Function_Id": 12,
"project_Domain_Name": null,
"project_Function_Name": null,
"productCalibrationStatus": false,
"isRecurringUser": false,
"project_Purpose": "simulation_bulk",
"relatesTo_Project_Id": 742,
"project_Status": "draft",
"id": 745
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Parameterizing the simulation project
Data inputs
For the simulation, the data recipients are Virtual Humans (VHs) instead of real human users.
Environmental
Heat
Smoke
Objects
User motion
Declaration of rooms or sections within the building
Conversation initiation or trigger and answers
Notes:
You do not need to pass physiological data inputs during simulation because Affective Computing (powered by Virtue) manages the physiological responses of the VHs.
Similarly, you do not need to pass answers from any interactions to Affective Computing (powered by Virtue) as during simulations, NPCs would not ask VHs questions.
Entities
For the simulation phase, you can declare as many VHs as you require, as shown in the sketch after the following note.
Note: You don’t need to declare another environment for this project as the environment is controlled on a third-party level (by Unreal Engine) and provided as a direct input to each VH.
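As shown in the sketch below, you can create the VH entities with the same DestinationEntity/CreateEntities API used for the environment entity, pointing at the simulation project; the destination_Entity_Types_Id value for VHs is a placeholder here, so check the entity types reference for the actual ID:
API
POST {{Server_URL}}/api/services/app/DestinationEntity/CreateEntities
Request sample
{
  "destination_Entity_Types_Id": 1, // Placeholder; use the entity type ID that corresponds to Virtual Humans
  "projectId": {{simulation_projectID}}, // The project ID of the bulk simulation project
  "entityList": [
    {
      "name": "VH_Visitor_01",
      "description": "Virtual Human augmented from the psychological profiles collected during RDC",
      "identifier": "VH_VISITOR_01"
    },
    {
      "name": "VH_Visitor_02",
      "description": "Virtual Human augmented from the psychological profiles collected during RDC",
      "identifier": "VH_VISITOR_02"
    }
  ]
}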
Objects
The environmental objects in the RDC project are also applicable to the simulation phase. (This is achieved when the RDC project is cloned.)
Actions and attributes
During the simulation, you do not require feedback questions for the VHs as the data collected during RDC is used to validate the emergency evacuation plan.
You must group actions that must be delivered to VHs when they are detected in each room under appropriate attributes and connect them to the respective interaction channel. This is required to ensure that all relevant actions are delivered at the appropriate time.
To simplify the implementation of your solution, we recommend using Unreal Engine for pathfinding rather than trying to achieve it through Affective Computing (powered by Virtue) logics.
Interaction channels
For the simulation project, you must configure the following:
The event of overcrowding should be an input channel (of type external signal) to enable Affective Computing (powered by Virtue) to understand that the actions declared in the RDC environment are the same as the ones that need to be used during simulation.
The alternate path taken should be declared as both input and output channels. This enables EDAA™ to identify the room where the EDNPC can go next and deliver it through the channel. (The actual pathfinding is dealt with at the Unreal Engine level).
Logics
You do not require environment and diagnostics logics for the simulation.
You must declare one logic for each VH with the following conditions:
Movement: This is the default condition after which Affective Computing (powered by Virtue) delivers the action triggered when a VH enters a room.
Crowd: Unreal Engine detects overcrowding and notifies Affective Computing so that it can consider this information when delivering the action of choosing a different room.
Step 5: Generate data for simulation
To generate the required data to run the simulation, you must do the following:
Generate raw data
Completing this task is the same as running the experience using the RDC project.
When the InitializeSession, Interact, and end_interaction endpoints of the Interaction API are called for each interaction between your solution and end-users, the data, actions, and stages (successful interactions) are produced.
Retrieving stages
The Interaction API can return the list of stages with their interaction types, such as voice, text, QR, or external signal.
It includes parameters such as interact_value, beats_Per_Minute (heart rate), and more, which are required for the simulation.
To learn more about the APIs, see:
Augment raw data for simulation
Data augmentation is the process of creating variations of the data collected during RDC for the purpose of simulation.
The following example illustrates how to begin data augmentation for bulk simulation using the Interaction/StartDataAugmentation_bulk endpoint:
API
POST {{Server_URL}}/api/services/app/v2/Interaction/StartDataAugmentation_bulk
To learn more about the API, see Begin data augmentation for bulk simulations.
Request sample
{
"session_ID": 31491, // Interaction session ID (stage) from the data collection to be augmented
"simulation_Project_ID": 146, // The project ID of the data collection project
"logics_Amount_To_Generate": 5
}
Response sample
{
"result": {
"augmentation_Patch_ID": 31491,
},
"targetUrl": null,
"success": true,
"error": null,
"unAuthorizedRequest": false,
"__abp": true
}
Note: In an upcoming version of Affective Computing, this API will support user persona attribute parameters, which would enable selective data augmentation over the data of only the users whose personas fit under those attributes.
This would allow you to augment a subset of the raw data rather than all of the raw data collected in the RDC project.
After collecting raw data and augmenting it, if you add additional actions to your project, you must perform both these tasks again; you must perform the corresponding interactions (that deliver the new actions to users) and repeat data augmentation.
Step 6: Run the simulation
Unlike RDC, in which a few real human users perform the interactions to set up a baseline, in the simulation, the users are Virtual Humans (VHs), who are EDAA™-powered emotionally-driven NPCs augmented from the profiles of the real users.
When running a simulation, you must do the following:
Summon the VHs
At the beginning of running the simulation, you must summon your VH entities (bring them to life) so that they can start interacting with the solution.
Ensure that you summon EDNPCs before starting the simulation.
Example: Summoning a VH
The following example illustrates how to summon a VH augmented from the psychological profile of an existing user using the SummonNPC endpoint in the Interaction API.
The API creates a virtual twin of the specified real human user who was involved in the raw data collection process.
This API can accept multiple entries to bulk-summon the VHs for the simulation project.
API
POST {{Server_URL}}/api/v2/Interaction/SummonNPC
To learn more about the API, see Summon a VH.
Request sample
{
"entity_ID": 342543, // Entity ID of the VH augmented from a real end-user
"user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile must power the VH
"projectId": 34245
}
Response sample
{
"id": 123, // VH ID of the summoned VH
"projectId": 34245,
"user_ID": "john_doe", // User ID of the real end-user whose augmented psychological profile powers the VH
"entity_ID": 342543,
"active": true,
"lastActiveTime": "2024-11-23T15:24:54.465Z"
"creationTime": "2024-11-23T15:24:54.465Z"
"endTime": null
}You must store the retrieved VH IDs so that you can kill (terminate) the VHs at the end of the simulation.
Validate availability of VHs
VHs can have three possible statuses: Offline, Preparing, and Online.
Your summoned VHs are only ready for use in interactions when they are in the Online status. Otherwise, if you try to perform an interaction using them, an error will occur as their instance isn't online yet.
You can use the SummonedEntityStatus endpoint of the Interaction API to view the availability statuses of summoned VHs. To learn more about the API, see View the status of all summoned VHs.
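For reference, a sketch of checking VH availability through the SummonedEntityStatus endpoint might look like the following; the HTTP method, path, and projectId parameter are assumptions, so confirm them in View the status of all summoned VHs:
API
GET {{Server_URL}}/api/v2/Interaction/SummonedEntityStatus
Request sample
{
  "projectId": 34245 // Hypothetical parameter; the simulation project whose summoned VHs you want to check
}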
Simulate interactions
Running a simulation simply means running a script in Unreal Engine that sequentially executes the different interactions that, in sum, make up the simulated experience. Do the following:
List and loop all stages retrieved when generating data for the simulation (during Step 5: Generate data for simulation).
For each stage, replace the parameters with those from the simulation and initialize, run, and end the interactions using the Interaction API. (For more information, see Running interactions.) This step represents actually running the simulation and enabling the VHs to interact with each other, the environment, and its objects to validate your solution.
Note: To initialize interaction sessions, you can do one of the following:
Initialize a new interaction session using the Interaction/InitializeSession API.
Retrieve the engagement from an existing session in the cloned project using the Reporting/GetEngagementBySession API.
Best practice
Affective Computing (powered by Virtue) enables you to resume a simulation at any point if it is interrupted manually or for any other reason. However, to continue the simulation with the same situational context as the point of interruption, details such as the session IDs, entity IDs, and the number of perceptions are required.
Therefore, as a best practice, when running simulations, we recommend saving this information at your end.
Terminate the VHs
After completing the simulation (looping through all the experience data), you must terminate the VHs.
Example: Terminating a VH
The following example illustrates how to terminate a VH using the KillNPC endpoint in the Interaction API:
You can use this API to bulk-terminate multiple VHs.
API
POST {{Server_URL}}/api/v2/Interaction/KillNPC
To learn more about the API, see Terminate the VHs.
Request sample
{
"npc_ID": 123, // Entity ID of summoned VH
}Response sample
{
"id": 123, // VH ID of the summoned VH
"projectId": 34245,
"user_ID": "john_doe", // User ID of the real end-user whose augmented psychological profile powers the VH
"entity_ID": 342543,
"active": false,
"lastActiveTime": "2024-11-28T12:32:57.465Z"
"creationTime": "2024-11-23T15:24:54.465Z"
"endTime": "2024-11-28T12:32:57.465Z"
}Step 7: Analyze its results
You can analyze the level of engagement in each session to validate your solution's performance. You can do this using the Reporting/GetEngagementBySession API. See the following example:
API
GET {{Server_URL}}/api/services/app/Reporting/GetEngagementBySession
To learn more about the API, see Viewing insights from project data.
Request sample
{
"sessionId": 238944,
"projectID": {{simulation_projectID}}, // The project ID of the bulk simulation project
"foreign_identity": "Interaction session for conversation interaction",
"language": "en",
"client_ip": "10.1.192.128"
}
Response sample
{
"result": [
{
"sessionId": 238944,
"engagement": 0.0,
"stage": 0,
"action": "",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 1,
"action": "Hello , how are you ?",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 1.0,
"stage": 4,
"action": "I have no idea, sorry",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 5,
"action": "I am not feeling ready yet to speak about this topic ",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 6,
"action": "I need more experience to answer this sorry ",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 50.0,
"stage": 7,
"action": "I am training hard to give you a good answer about this in the near future ",
"entityId": 16812,
"object_status": []
},
{
"sessionId": 238944,
"engagement": 38.0,
"stage": 8,
"action": "I have no idea, sorry",
"entityId": 16812,
"object_status": []
}
],
"targetUrl": null,
"success": true,Additionally, based on the requirements of your solution, you can use an external tool or Unreal Engine to visualize and consume the results. It depends on how you define solution validation.