Tutorial: Validating designs in digital twin simulations with clones

A step-by-step guide to building an example solution that validates a design in its digital twin through clone simulation

This tutorial explores an example solution powered by Virtue's framework that simulates real human experiences, focusing on how clones (Virtual Humans, or VHs, in a 3D environment) interact with various objects and display emotions.

During the simulation, VHs (powered by psychological profiles cloned from real humans) participate in an experiential scenario, which is a virtual recreation of a real experience. This enables you to gain insights about how multiple variables could impact humans psychologically in the real-life version of the experience.

Business case

A company that designs home interiors wants to validate design options with their customers in Virtual Reality, so that they can design and deliver the best possible room designs.

Challenges

Designing home interiors has the following challenges:

  1. Once a room's fit-out work is completed, making changes or rebuilding is expensive and impractical.

  2. As a wide variety of design choices exist, the design process can be overwhelming for both the company and customers.

  3. Couples, such as a husband and wife, may have conflicting preferences, complicating the decision-making process.

Designing and implementing a Clone Simulation solution

Leveraging Affective Computing by Virtue's framework when implementing your solution for this business case is an excellent choice.

You can take advantage of features such as simulation to recreate various scenarios and observe their impact on emotionally-driven digital characters (Virtual Humans or VHs), thereby streamlining the process of validating the experience.

Overview

An Affective Computing by Virtue-powered solution that aims to personalize end-user experiences through clone simulation typically consists of the phases described in the following sections.

Before you start implementing your Affective Computing by Virtue-powered solution, we recommend researching and understanding its purpose, goals, and the specifics of its real-world implementation, including procuring and setting up all necessary devices and connecting any external tools.

To learn more, see How to set up a solution.

This tutorial focuses on the concepts and steps related to setting up and parameterizing your project after you have all of these things in place.

You must also first design the end-to-end experience workflow and define how you want to measure your solution's success. For example, the workflow could include the following steps:

  • A designer showcases an initial room design to customers in an interactive Virtual Reality experience, which is built and configured using a 3D engine such as Unreal Engine.

  • (Diagnostics) Customers answer questions to enable EDAA™ to generate their preliminary psychological profiles.

  • The interactive VR experience includes changing materials in real time based on customer input. Reactions of customers (such as their speech and heart rate) are recorded.

  • Based on the responses, the designer identifies specific objects and materials for further validation.

  • The experience is simulated using clones (VHs whose psychological profiles are cloned from the customers' profiles). The VHs interact with the selected materials and their level of engagement with each object and material type is observed.

  • A report is generated to help customers make informed decisions about the materials and objects based on the engagement results.

Diagnostics and calibration

Diagnostics is the process in which EDAA™, the underlying framework of Affective Computing by Virtue, analyzes and establishes a preliminary (baseline) psychological profile for each user.

Calibration is the process of validating and adjusting each user's previously-established psychological profile based on their current psychological state.

For detailed information, see Diagnostics and calibration.

Both these processes are important parts of data collection, which is the first phase of implementing the solution. Data collection provides your solution with baselines, context, and boundaries.

To collect this initial data, you can either run a real experience or input synthetic data from an external data set.

However, because this solution aims to provide highly-personalized interior design based on customer preferences (which can only be achieved satisfactorily if the initial data is obtained directly from the customers), we recommend collecting the initial data by running the real experience (raw data collection).

Raw data collection

Before simulating an experience, Affective Computing by Virtue must be made aware of the boundaries of each situation. In addition to improving solution performance, doing this helps prevent AI hallucinations.

Raw data collection (RDC) helps you achieve this. RDC simply means running the experience provided by your solution with real human users.

This process introduces the reality of the experience and provides context when situations similar to the ones that the real human users face are simulated using Virtual Humans (VHs).

To learn more about RDC, see Importance of raw data collection for simulation.

Simulation

Simulation enables you to replicate the situations that the real human users face and observe their effects over a custom period of time. This is the next phase of implementing the solution.

Simulation means running an experience in a virtual environment using VHs, whose psychological profiles are cloned or augmented from that of the real human users.

Affective Computing by Virtue supports two types of simulation:

  • Clone simulation: A type of experience simulation in which each VH has a psychological profile that matches that of a specific real human end-user. This tutorial aims to help you understand how to implement clone simulation.

  • Bulk simulation: A type of experience simulation in which VHs are not 1:1 clones of real human end-users, but instead are based on augmented data from real users' psychological profiles.

For more information, see Simulation.

Result analysis

In the final phase of implementing the solution, you can analyze user engagement levels in each interaction session to gain data insights and visibility into the solution's performance.

Additionally, depending on the specifics of your solution's requirements, design, and definition of success, you can integrate external data analytics tools to visualize the results, generate reports, and gain the specific insights you require.

Implementing the solution

The process of implementing the solution consists of the following steps:

Working with our APIs

As an internal user responsible for setting up and managing an Affective Computing by Virtue project, you must use the API key and access token that enable you to integrate your front end with Orchestra and work with our APIs.

For more information, see Authenticating your data source to use our APIs.
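The following sketch illustrates one way to attach these credentials when calling the APIs from a script. It is a minimal example, assuming a Python client and a bearer-token/API-key header scheme; the header names, the requests library usage, and the call_api helper are illustrative assumptions rather than part of the product documentation.

import requests

# Illustrative placeholders — replace with your actual {{Server_URL}}, API key, and access token.
SERVER_URL = "https://your-orchestra-server.example.com"
API_KEY = "<your-api-key>"
ACCESS_TOKEN = "<your-access-token>"

def call_api(path: str, payload: dict) -> dict:
    """Send an authenticated POST request to an Affective Computing by Virtue endpoint.

    The header names below are assumptions for illustration; use the scheme described in
    "Authenticating your data source to use our APIs" for your environment.
    """
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # assumed bearer-token header
        "x-api-key": API_KEY,                       # assumed API-key header name
        "Content-Type": "application/json",
    }
    response = requests.post(f"{SERVER_URL}{path}", json=payload, headers=headers, timeout=30)
    response.raise_for_status()  # fail fast on HTTP errors
    return response.json()

The later sketches in this tutorial reuse this call_api helper.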

Step 1: Create the RDC project

You can create a project for raw data collection (RDC) by following the procedure described in Creating and managing projects. You can use this project to run multiple RDC sessions.

Example: Creating a project

The following example illustrates how to create a project for the purpose of RDC using the Project/Create API:

You can use the project_Purpose parameter to configure the purpose of the project. In this case, you can configure its value as Data_Collection.

API

POST {{Server_URL}}/api/services/app/V2/Project/Create

To learn more about the API, see Create a project.

Request sample

{
   "projectName":"Validating Designs - RDC",
   "project_Domain_Id":4, // 4 -> Digital user experience
   "project_Function_Id":13, //13 -> Neuro-architecture
   "project_Purpose":"Data_Collection", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
   "isRecurringUser":false,
   "productCalibrationStatus":false,
   "interactionSetups":[
      {
         "interaction_setup_id":2,
         "interaction_mode":"diagnostics" // Configuring the data input for Heart Rate for diagnostics
      },
      {
         "interaction_setup_id":3,
         "interaction_mode":"diagnostics" // Configuring the data input for Speech for diagnostics
      },
      {
         "interaction_setup_id":2,
         "interaction_mode":"action" // Configuring the data input for Heart Rate for action mode
      },
      {
         "interaction_setup_id":3, // Configuring the data input for Speech for action mode
         "interaction_mode":"action" 
      },
      {
         "interaction_setup_id":26, // Configuring the data input for external signals for action mode
         "interaction_mode":"action"
      },
      {
         "interaction_setup_id":2, // Configuring the data input for Heart Rate for UC mode
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":3, // Configuring the data input for Speech for UC mode
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":26, // Configuring the data input for external signals for UC mode
         "interaction_mode":"user_calibration"
      }
   ]
}

Response sample

{
    "result": {
        "projectName": "Validating Designs - RDC",
        "duplicated_Project_Id": null,
        "duplicated_Project_Name": null,
        "project_Domain_Id": 4,
        "project_Function_Id": 13,
        "project_Domain_Name": null,
        "project_Function_Name": null,
        "productCalibrationStatus": true,
        "isRecurringUser": false,
        "project_Purpose": "Data_Collection",
        "relatesTo_Project_Id": null,
        "project_Status": "draft",
        "id": 1234
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Step 2: Parameterize the RDC project

After creating the RDC project, you can parameterize it to customize the behavior of Affective Computing by Virtue according to the requirements of your solution.

For detailed information about parameterization, see Parameterizing a project.

During this step, you can do the following:

Parameterize data inputs

You must declare (define) data inputs for each type of (raw or synthetic) external data you want to utilize in your solution. You can do this at the same time as creating the project.

For this project, you can configure the following data inputs:

Recipient: Real human user

Physiological inputs:

  • Heart rate, through a wearable heart rate sensor

  • Speech, through a microphone

User motion inputs:

  • Conversation initiation or trigger and answers (time to answer, number of words, pauses between words, duration of the answer, meaning, and more)

  • Interactions with the application's front end, depending on how the experience is designed

For more information, see Understanding data inputs.

Example: Configuring data inputs

The following example illustrates how to declare data inputs using the interactionSetups parameter of the Project/Create API.

This parameter enables you to declare inputs and their respective operational modes.

API

POST {{Server_URL}}/api/services/app/V2/Project/Create

To learn more about the API, see Create a project. Also see Viewing available data inputs (interaction setups).

Request sample

{
   "projectName":"Validating Designs - RDC",
   "project_Domain_Id":4, // 4 -> Digital user experience
   "project_Function_Id":13, //13 -> Neuro-architecture
   "project_Purpose":"Data_Collection", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.
   "isRecurringUser":false,
   "productCalibrationStatus":false,
   "interactionSetups":[
      {
         "interaction_setup_id":2,
         "interaction_mode":"diagnostics" // Configuring the data input for Heart Rate for diagnostics
      },
      {
         "interaction_setup_id":3,
         "interaction_mode":"diagnostics" // Configuring the data input for Speech for diagnostics
      },
      {
         "interaction_setup_id":2,
         "interaction_mode":"action" // Configuring the data input for Heart Rate for action mode
      },
      {
         "interaction_setup_id":3, // Configuring the data input for Speech for action mode
         "interaction_mode":"action" 
      },
      {
         "interaction_setup_id":26, // Configuring the data input for external signals for action mode
         "interaction_mode":"action"
      },
      {
         "interaction_setup_id":2, // Configuring the data input for Heart Rate for UC mode
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":3, // Configuring the data input for Speech for UC mode
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":26, // Configuring the data input for external signals for UC mode
         "interaction_mode":"user_calibration"
      }
   ]
}

Response sample

{
    "result": {
        "projectName": "Validating Designs - RDC",
        "duplicated_Project_Id": null,
        "duplicated_Project_Name": null,
        "project_Domain_Id": 4,
        "project_Function_Id": 13,
        "project_Domain_Name": null,
        "project_Function_Name": null,
        "productCalibrationStatus": true,
        "isRecurringUser": false,
        "project_Purpose": "Data_Collection",
        "relatesTo_Project_Id": null,
        "project_Status": "draft",
        "id": 1234
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Parameterize entities and objects

Affective Computing by Virtue supports the following types of entities:

  • Environment: The 3D environment in which real or virtual human users interact with the solution.

  • Virtual Humans (VHs): Emotionally-driven NPCs (powered by EDAA™) that interact with the solution during simulation.

The environment entity represents the environment in which users would participate in the experience.

The actual physics and 3D aspects of the environment can be designed, controlled, and managed at the third-party level, using Unreal Engine or any other 3D engine.

When parameterizing the RDC project, as the participants are real human users, you do not need to declare VH entities.

Objects are the items in the environment with which your (real or virtual) end users can interact. They can serve as a reference for data visualization or as channels to reflect status updates.

For your solution, you can declare all the objects that you want to include in the interior design, such as walls, surfaces, sinks, faucets, light fixtures, panels, and more.

For more information, see:

Example: Creating an entity

The following example illustrates how to create an environment entity using the DestinationEntity/CreateEntities API:

You can use the destination_Entity_Types_Id parameter to specify whether you want to create a 3D environment or NPC (VH) entity. As you are creating an environment in this example, the value of this parameter is 2.

When invoking this API to declare Virtual Humans (VHs) for your simulation project, you can configure its value as 1.

API

POST {{Server_URL}}/api/services/app/DestinationEntity/CreateEntities

To learn more about the API, see Create an entity.

Request sample

{
    "destination_Entity_Types_Id": 2, // 2 => 3D Environment
    "projectId": {{data_collection_projectID}}, // The project ID of the Data_Collection (RDC) project
    "entityList":
    [
        {
            "name": "3BHKHome1",
            "description": "3 BHK apartment prototype - environment to be used for data collection”,
            "identifier": "ENV_3BHKO1"
        }
    ]
}

Response sample

{
    "result": [
        139979
    ],
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Creating an object

The following example illustrates how to create an object using the DestinationObject/CreateObjects API:

API

POST {{Server_URL}}/api/services/app/DestinationObject/CreateObjects

To learn more about the API, see Create an object.

Request sample

{
  "destination_Entity_Environment_Id": 139979, // The environment in which the object must be created
  "destinationObjectList": [
    {
        "name": "Room 1 Wall Panel Wood",
        "description": "Wooden wall panel in room 1",
        "identifier": "room1_wall_panel_wood"
    }]
}

Response sample

{
    "result": [
        663
    ],
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Parameterize actions and attributes

Affective Computing by Virtue supports the following categories of actions:

  • Content: Delivering any media, such as images, video, or sound.

  • Interactions: Delivering statements or asking questions.

  • Triggered actions: Delivering an action as a response to specific events or conditions.

As the scope of the solution described in this tutorial only includes interactions and triggered actions, you only need to parameterize actions of these categories.

As your solution aims to personalize interior design based on user preferences, you can define changes to objects, based on all possible variations you can offer, as triggered actions. For example, if you can provide 15 different light fixture options, kitchen platforms in 10 different materials, and 5 different wallpaper designs, you must declare all combinations as triggered actions.

As your solution also relies on EDAA™ to ask the participants questions, you must declare the questions as interactions and design them based on what you are trying to validate. For example, you can design a question that asks users how they feel about a particular color palette.

We also recommend thoughtfully planning who would ask the questions.

As humans wouldn't typically expect questions from an unseen voice, you could introduce an object in your solution that is implemented as an NPC in Unreal Engine; during the experience, the NPC approaches a user and asks them the question.

Attributes personalize interactions between your solution and end-users by shaping and classifying actions. You can think of them as "folders" that group related or similar actions.

For example, an attribute called boho_design can contain the actions that must be delivered to implement a bohemian vibe in the interior design.

Similar to actions, attributes are also categorized as content attributes, interaction attributes, and triggered action attributes.

You must group your actions under appropriate attributes of the appropriate category. For example, all questions posed to users in Room 1 can be grouped under an interaction attribute called room1_questions.

For more information, see:

Example: Creating an action

The following example illustrates how to create a triggered action using the FeedingData/Create API. The action in this example is designed to change the light fixture in Room 1 to a jute chandelier:

You can use the feeding_Action_Category_ID parameter to configure the action category. In this example, its value is configured as 3, which denotes triggered actions.

API

POST {{Server_URL}}/api/services/app/v2/FeedingData/Create

To learn more about the API, see Add an action.

Request sample

{
  "projectId": {{projectID}}, // The project ID of the project
  "identity": "room1_light_fixture_jute_chandelier",
  "feeding_Value": "room1_light_fixture_jute_chandelier",
  "feeding_Action_Category_ID": 3, // 3 => Triggered action (The action category)
  "feeding_Action_Type_ID": 18, // 18 => Value (The action type) 
  "isCopyRighted": true,
  "isDiagnostics": false,
  "isVerified": true
}

Response sample

{
    "result": {
        "projectId": 742,
        "identity": "room1_light_fixture_jute_chandelier",
        "feeding_Value": "room1_light_fixture_jute_chandelier",
        "feeding_Action_Type_ID": 18,
        "feeding_Action_Category_ID": 3,
        "isImported": false,
        "isCopyRighted": true,
        "isDiagnostics": false,
        "isVerified": true,
        "id": 127544
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Creating an attribute

The following example illustrates how to create a triggered action attribute (to group triggered actions) using the FeedingTriggeredActionAttribute/Create API. In this example, the attribute groups the actions triggered in the living room of the apartment:

API

POST {{Server_URL}}/api/services/app/FeedingTriggeredActionAttribute/Create

To learn more about the API, see Create a triggered action attribute.

Request sample

{
  "projectId": {{data_collection_projectID}}, // The project ID of the project
  "name": "living_room",
  "action_Type_ID": 18, // 18 => Value (The action type)
  "feedingDataIds": [
    127544,127545,127546 // The triggered actions that are grouped under the attribute
  ]
}

Response sample

{
    "result": {
        "id": 530,
        "name": "room1",
        "isDeleted": false,
        "isImported": false,
        "projectId": 742,
        "project_Domain_Id": null,
        "project_Function_Id": null,
        "action_Type_ID": 18,
        "tenantName": null,
        "projectName": null,
        "project_Domain_Name": null,
        "project_Function_Name": null,
        "feedingDatasIds": null
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Parameterize interaction channels

You must declare interaction channels for each (type of) interaction you want to set up between your solution and end-users.

Affective Computing by Virtue supports the following types of interaction channels:

  • Input channel: Enables EDAA™ to receive information to change the state of something.

  • Output channel: Enables EDAA™ to channel an action through something.

You can design your solution experience to enable users to simply touch an object (in VR) to trigger the action of changing a property (such as its color or material). In this case, you can configure the object as both an input and output channel.

For more information, see:

Example: Creating an input channel

The following example illustrates how to create an input interaction channel using the InteractionChannel/Create API:

You can use the interaction_Channel_Types_Id parameter to configure the channel type, which determines the direction of data flow in the channel.

In this example, its value is configured as 1, which denotes an input interaction channel.

API

POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create

To learn more about the API, see Create an interaction channel.

Request sample

{
  "projectId": {{data_collection_projectID}}, // The project ID of the project
  "interaction_Channel_Types_Id": 1, // 1 => Input (Channel type)
  "interaction_Input_Types_Id": 6, // 6 => Object (Input type)
  "identifier": "wall_panels_ic",  
  "value": "wooden_wall_panels",
  "active": true,
  "interaction_Input_Category_Id": 472 // The triggered action attribute category that groups actions triggered when a user touches the object
}

Response sample

{
    "result": {
        "tenantId": 6,
        "projectId": {{data_collection_projectID}},
        "project_Domain_Id": null,
        "project_Function_Id": null,
        "interaction_Input_Types_Id": 6, // 6 => Object (Input type)
        "tenantName": null,
        "projectName": null,
        "identifier": "wall_panels_ic",
        "value": "wooden_wall_panels",
        "active": true,
        "interaction_Input_Category_Id": 472,
        "interaction_Input_Category_Name": "wall_panels_options",
        "triggered_Action_Name": null,
        "triggered_Action_Id": null,
        "isActive": true,
        "destination_Entity_Name": null,
        "destination_Entity_Object_Name": null,
        "destination_Entity_Types_Id": null,
        "destination_Entity_Types_Name": null,
        "destination_Entity_Id": null,
        "destination_Entity_Object_Id": null,
        "interaction_Channel_Types_Id": 1,
        "id": 2480
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Creating an output channel

The following example illustrates how to create an output interaction channel using the InteractionChannel/Create API:

You can use the interaction_Channel_Types_Id parameter to configure the channel type, which determines the direction of data flow in the channel.

In this example, its value is configured as 2, which denotes an output interaction channel.

API

POST {{Server_URL}}/api/services/app/v2/InteractionChannel/Create

To learn more about the API, see Create an interaction channel.

Request sample

{
  "projectId": {{data_collection_projectID}}, // The project ID of the project
  "interaction_Channel_Types_Id": 2, // 2 => Output (Channel type)
  "identifier": "wall_panels_oc",  
  "value": "pvc_slats",
  "active": true,
  "destination_Entity_Object_Id": 1234, // The object ID of the object that changes, i.e., in this case, the wall panel 
  "Triggered_Action_Attribute_Id": 530, // The triggered action attribute category that groups the required triggered actions
}

Response sample

{
    "result": {
        "tenantId": 6,
        "projectId": {{data_collection_projectID}},
        "project_Domain_Id": null,
        "project_Function_Id": null,
        "interaction_Input_Types_Id": null,
        "tenantName": null,
        "projectName": null,
        "identifier": "wall_panels_oc",
        "value": "pvc_slats",
        "active": true,
        "interaction_Input_Category_Id": null,
        "interaction_Input_Category_Name": null,
        "triggered_Action_Name": null,
        "triggered_Action_Id": null,
        "isActive": true,
        "destination_Entity_Name": null,
        "destination_Entity_Object_Name": "Wall panel",
        "destination_Entity_Types_Id": 2,
        "destination_Entity_Types_Name": null,
        "interaction_Channel_Types_Id": 2,
        "id": 2481
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Parameterize logics

Logics define (or modify) your solution's behavior and enable you to personalize interactions.

A logic has the following components:

  • Activator: The recipient of the logic, based on psychological profile (except in the case of diagnostics logics).

  • Condition: All events that can trigger the logic.

  • Action: The resulting action that needs to be delivered by EDAA™. It can either be one specific action or any action grouped under an attribute.

  • Operators: Logical operators (AND and OR) that define the flow of the logic and enable you to introduce multiple conditions and rules of interdependence between conditions and actions.

Note the following about anchoring:

  • You can anchor actions to ensure that EDAA™ doesn't change (generatively evolve) them and doesn't generate any actions inside the parent attribute.

  • You can anchor logics to ensure that EDAA™ doesn't generate new logics based on them and always uses them as-is. However, anchoring logics reduces personalization.

For the RDC project, you must create the following logics:

Diagnostics logics

Used for the diagnostics process. Creating diagnostics logics is mandatory for the RDC project. Configure them as follows:

  • Activator: All new users.

  • Condition: Whenever a new user is detected by EDAA™ (and is interacting with the solution for the first time).

  • Action: A set of questions that help EDAA™ establish the preliminary psychological profile of the user:

    • Five initial questions are pre-defined by EDAA™.

    • (Optional) You can create 3 additional questions to establish the user's persona type (to activate specific actions by leveraging the attribute name).

Logic for feedback questions

Defines how EDAA™, through NPCs, can interact with users to validate the proposed design. Configure it as follows:

  • Activator: All profiles.

  • Condition: A design option was presented to the user.

  • Action: The attribute that contains the questions. Note: If you want to ask a specific question (for example, about a specific design option), you can set up the specific action instead of configuring an attribute from which actions are selected.

Logic for personalizing object design

Defines how object designs (for example, color or material) are updated based on user preferences. Configure it as follows:

  • Activator: All profiles.

  • Condition: The user interacts with an object.

  • Action: The attribute that contains alternate design options for the object as triggered actions.

For more information, see:

Example: Creating a logic blueprint

The following example describes how to create a logic blueprint using the logics/CreateLogic API:

API

POST {{Server_URL}}/api/services/app/logics/CreateLogic

To learn more about the API, see Create a logic blueprint.

Request sample

{
  "logicName": "room1_logics",
  "bluePrinting_logics_type_id": 2, // 2 => User calibration (logic type)
  "projectId": {{data_collection_projectID}}, // The project ID of the project
  "activator": 1, // 1 => Profile (Logic activator)
  "activator_Type": 22, // 2 => All profiles (Logic activator type)  
  "anchored": true
}

Response sample

{
    "result": {
        "logicId": 48165
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Creating a logic condition

The following example describes how to create and configure the condition that triggers a logic using the logics/CreateLogicCondition API:

API

POST {{Server_URL}}/api/services/app/logics/CreateLogicCondition

To learn more about the API, see Add a logic condition to a logic blueprint.

Request sample

{
  "logicId": 48165,
  "logicConditionList": [
    {
      "condition_id": 4, // 4 => Environmental (condition)
      "condition_type_id": 47, // 47 => Object (condition type)
      "logical_Operator" : "Or",
      "logicConditionValuesList": [
        {
        "interaction_Input_Id": 2480 //The input of the wall_panels object
        }
      ]
    }, {
      "condition_id": 4, // 4 => Environmental (condition)
      "condition_type_id": 38, // 38 => External signal (condition type)
      "logical_Operator" : "Or",
      "logicConditionValuesList": [
        {
          "interaction_Input_Id": 2482 // Input based on the question asked about wall panels
        }
      ]
    }
  ]
}

Response sample

{
    "result": {
        "condition_Ids": [
            53036, // The condition ID for object interaction
            53037 // The condition ID for feedback
        ]
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Creating a logic action

The following example describes how to configure the action delivered when a logic is triggered using the logics/CreateLogicAction API:

API

POST {{Server_URL}}/api/services/app/logics/CreateLogicAction

To learn more about the API, see Add a logic action to a logic blueprint.

Request sample

{
  "logicId": 48165,
  "logicActionList": [
    {
      "execution_order": 0,
      "feeding_content_Direction_Id": 3, // 3 => Output as interaction channel
      "action_type_Id": 4, // 4 => Triggered action
      "interaction_Channels_Id": 2481, // The channel for replacing the object
      "anchored": true
    }
  ]
}

Response sample

{
    "result": {
        "actionIds": [
            22903
        ]
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Example: Mapping a logic action with a logic condition

The following example describes how to map a logic action with a logic condition using the logics/CreateLogicActionMapping API:

API

POST {{Server_URL}}/api/services/app/logics/CreateLogicActionMapping

To learn more about the API, see Map conditions to actions for a logic blueprint.

Request sample

{
  "logicId": 48165, // The logic
  "conditionActionList": [
    {
      "conditionId": 53036, // The condition of the user selecting an object
      "actionId": 22903, // The interaction channel that contains triggered actions of possible design options for the object
      "logical_Operator": "And" // As one condition is linked to one action, this doesn't have any effect
    }
  ]
}

Response sample

{
    "result": {
        "conditionActionMappingId": [
            17375
        ]
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Step 3: Run the experience to collect initial data

Declaring end-users

Solutions powered by Affective Computing can have the following kinds of users based on function:

  • Internal users: Administrative or managerial users who are responsible for tasks such as feeding actions and managing logic blueprints. Typically, they don't interact with the solution directly as end-users.

  • External users: Users who interact directly with the solution as end-users and participate in interactions. External users can be the same people as internal users or a completely different set of users.

For more information, see Creating and managing end users of the solution.

When running the solution, you must declare your end-users (external users). Doing this generates their user ID, which is a unique ID that enables identifying them, tracking their activity, and monitoring the impact of the solution experience on them.

You can declare end-users using the ExternalUser/Create API. To learn more about the API, see Create a user ID for a new user.
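The following is a minimal sketch of declaring an end-user from a script, reusing the call_api helper sketched earlier. The endpoint name comes from this tutorial, but the full path, request fields, and response shape shown here are assumptions; see Create a user ID for a new user for the exact schema.

# Hypothetical request: the field names (projectId, name, identifier) are illustrative
# assumptions — confirm them against the ExternalUser/Create API reference.
new_user = call_api(
    "/api/services/app/ExternalUser/Create",   # assumed full path for the ExternalUser/Create API
    {
        "projectId": 742,                      # hypothetical RDC project ID
        "name": "Customer A",                  # hypothetical display name
        "identifier": "customer_a",            # hypothetical external identifier
    },
)

# Assumed: the generated user ID is returned in the standard "result" envelope.
external_user_id = new_user["result"]
print("Declared end-user with user ID:", external_user_id)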

Running interactions

From the point of view of Affective Computing (powered by Virtue), running an experience simply means executing (running) the interactions between the solution (in this case, the design validation experience in VR) and its users.

To enable running your solution experience, for each type of interaction it includes, you must do the following:

  1. Initialize interaction sessions using the Interaction/InitializeSession API. See the following example:

    API

    POST {{Server_URL}}/api/app/v2/Interaction/InitializeSession

    To learn more about the API, see Initialize an interaction session.

    Request sample

    {
        "externalUserID":"50c91e36-3f69-4245-ad20-53db39d780c9", // unique identifier of the user
        "projectID": 493, // The project ID of the project
        "foreign_identity": "Determining whether the user likes the light fixture",
        "language": "en",
        "client_ip": "10.1.192.128"
    }

    Response sample

    {
        "result": {
            "interaction_Session": 1269064,
            "isValidated": false,
            "force_user_calibration": false
        },
        "targetUrl": null,
        "success": true,
        "error": null,
        "unAuthorizedRequest": false,
        "__abp": true
    }
  2. Interact using the Interaction/Interact API. See the following example:

    API

    POST {{Server_URL}}/api/app/v2/Interaction/Interact

    To learn more about the API, see Perform an interaction.

    Request sample

    {
        "interaction_Session": 35006, // session ID of the initialized interaction session
        "beats_Per_Minute":75,   
        "time_Taken_To_Response": "1",  
         "interact_type": "external_signal", 
         "interact_value": "light_fixture_ic", // Interaction channel that provides the input as an external signal
         "mode": "action"
     }

    Response sample

    {
        "result": {
            "sound": null,
            "statement": "",
            "question": "",
            "content": [],
            "music": "",
            "action": null,
            "interaction_channel_id": 530, // Interaction channel ID of the input channel configured as the interact value
            "triggered_action": "light_fixture_ta", // Triggered action delivered as part of the interaction
            "last_stage": false,
            "repeat_stage": false,
            "audio_speed": "medium",
            "change_mode": null,
            "status": "success",
            "errorcode": "",
            "errorMessage": ""
        },
        "targetUrl": null,
        "success": true,
        "error": null,
        "unAuthorizedRequest": false,
        "__abp": true
    }
  3. End interaction sessions using the Interaction/end_interaction API. See the following example:

    API

    POST {{Server_URL}}/api/app/v2/Interaction/end_interaction

    To learn more about the API, see End an interaction session.

    Request sample

    {
      "interaction_Session": 35006
    }

    Response sample

    {
        "result": true,
        "targetUrl": null,
        "success": true,
        "error": null,
        "unAuthorizedRequest": false,
        "__abp": true
    }

When running an experience (by running interactions), you must do the following:

  1. End-users must first complete the diagnostics process. To take them through the diagnostics process, you can use the Interaction API. (For more information, see Running interactions.)

  2. After the diagnostics, end-users can start the experience by observing design options in VR. To perform this step, you can use the same Interaction API in user calibration mode:

    1. Perform interactions with Affective Computing (powered by Virtue), providing calibration data:

      1. For voice interactions, you must convert audio to base64 and submit it through the respective interaction channel (specify it as the interact_value).

      2. When an object is changed in an interaction (for example, an end-user dislikes an option and the designer therefore changes it), submit the object ID and the corresponding triggered action ID that reflects the new value.

Invoking the endpoints of the Interaction API produces the necessary data and defines the stages for simulating the interactions in the next phase. Doing this provides Affective Computing (powered by Virtue) with the necessary information to simulate the interactions of the VHs.
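As a concrete illustration of the voice case above, the following sketch base64-encodes an audio clip and submits it in user calibration mode, reusing the call_api helper sketched earlier. The endpoint path and field names follow the Interact request sample shown previously; treating the base64 audio as the interact_value and using "voice" as the interact_type are assumptions, so confirm the exact field usage against the Interaction API reference.

import base64

# Read the recorded answer and convert it to base64, as required for voice interactions.
with open("user_response.wav", "rb") as audio_file:
    audio_b64 = base64.b64encode(audio_file.read()).decode("ascii")

# Submit the interaction in user calibration mode. interact_type "voice" and passing the
# base64 audio as interact_value are assumptions for illustration.
call_api(
    "/api/app/v2/Interaction/Interact",
    {
        "interaction_Session": 35006,   # session ID of the initialized interaction session
        "interact_type": "voice",       # assumed type identifier for voice interactions
        "interact_value": audio_b64,    # base64-encoded audio submitted through the voice channel
        "mode": "user_calibration",
    },
)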

Step 4: Set up the simulation project

Similar to Step 1: Create the RDC project, you can create a project for simulation.

Example: Creating a simulation project that relates to an RDC project

The following example illustrates how to create a simulation project using the Project/Create API:

When creating the simulation project, the relatesTo_Project_Id parameter must refer to the project ID of the data collection project. This is how the link between the two projects is established.

API

POST {{Server_URL}}/api/services/app/V2/Project/Create

To learn more about the API, see Create a project.

Request sample

{
  "projectName": "Simulation - SIM",
  "project_Domain_Id": 4, // 4 -> Digital user experience
  "project_Function_Id": 12, // 13 -> Neuro-architecture
  "duplicate_Project_Id": 742, // The project ID of the Data_Collection project
  "RelatesTo_Project_Id": 742, // The project ID of the Data_Collection project
  "project_Purpose": "Simulation", //The value for this parameter can be General, Data_Collection, Simulation, or simulation_bulk.  
  "isRecurringUser": false,
  "productCalibrationStatus": false, // Deactivates product calibration mode for the simulation.
  "interactionSetups":[
      {
         "interaction_setup_id":2,
         "interaction_mode":"diagnostics"
      },
      {
         "interaction_setup_id":3,
         "interaction_mode":"diagnostics"
      },
      {
         "interaction_setup_id":2,
         "interaction_mode":"action"
      },
      {
         "interaction_setup_id":3,
         "interaction_mode":"action"
      },
      {
         "interaction_setup_id":27,
         "interaction_mode":"action"
      },
      {
         "interaction_setup_id":2,
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":3,
         "interaction_mode":"user_calibration"
      },
      {
         "interaction_setup_id":27,
         "interaction_mode":"user_calibration"
      }
   ]
}

Response sample

{
    "result": {
        "duplication_Summary": {
            "feedingDataMapping": {
                "127543": 127553,
                "127544": 127554,
                "127545": 127555,
                "127546": 127556,
                "127547": 127557
            },
            "contentAttributesMapping": {},
            "interactionAttributesMapping": {},
            "destinationEntity_Mapping": {
                "14866": 14869
            },
            "interactionChannelsMapping": {
                "2480": 2488,
                "2481": 2489,
                "2482": 2490,
                "2483": 2491
            },
            "interactionChannelsInputCategoryMapping": {
                "472": 474
            },
            "logicsMapping": {
                "48165": 48212
            },
            "logicActionsMapping": {
                "22903": 22940,
                "22904": 22941
            },
            "logicConditionsMapping": {
                "53036": 53085,
                "53037": 53086
            },
            "triggeredActionAttributesMapping": {
                "530": 534,
                "531": 535
            },
            "destinationEntityEnvironmentObjects_Mapping": {
                "663": 665
            }
        },
        "projectName": "Simulation-SIM",
        "duplicated_Project_Id": 742,
        "duplicated_Project_Name": null,
        "project_Domain_Id": 4,
        "project_Function_Id": 12,
        "project_Domain_Name": null,
        "project_Function_Name": null,
        "productCalibrationStatus": false,
        "isRecurringUser": false,
        "project_Purpose": "Simulation",
        "relatesTo_Project_Id": 742,
        "project_Status": "draft",
        "id": 745
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Parameterizing the simulation project

To parameterize the simulation project, configure the following project components:

  • Data inputs: For the simulation, the data recipients are Virtual Humans (VHs) instead of real human users. No physiological data inputs are required for the simulation phase, as the physiological responses of the VHs are managed by Affective Computing (powered by Virtue) during simulation. You can configure the same user motion data inputs as you did for the RDC project, as the VHs also provide responses indicating their preferences during the simulated experience in the same manner.

  • Entities:

    • Environment: Whether or not you need to create an environment for simulation depends on how your solution is designed. If you want to use a different environment for the simulation phase of the solution, you can create it for the simulation project.

    • Virtual Humans: For the simulation phase, you must declare as many VHs as there are end-users participating in the experience, as the VHs are clones of the real human users.

  • Objects: You must declare additional environmental objects based on the requirements of your solution. Note: If you have cloned the RDC project, the objects declared for the RDC project are also applicable in the simulation.

  • Actions and attributes: For all new objects created for simulation, you must create actions that correspond to design variations and group them under appropriate (new) attributes. Similarly, if you want to ask the VHs participating in the simulated experience any additional questions (other than the ones already declared for the RDC project, assuming that you have cloned that project to create the simulation), you must create the corresponding actions and attributes.

  • Interaction channels: Similar to actions and attributes, you must also create the corresponding interaction channels.

  • Logics: Similar to actions, attributes, and interaction channels, you must create any logics you require for the simulation phase of the solution. Note: You do not require diagnostics logics for the simulation.

Step 5: Generate data for simulation

To generate the required data to run the simulation, you must do the following:

  • Generate raw data: Completing this task is the same as running the experience using the RDC project. When the InitializeSession, Interact, and end_interaction endpoints of the Interaction API are called for each interaction between your solution and end-users, the data, actions, and stages (successful interactions) are produced.

  • Retrieve stages: The Interaction API can return the list of stages with their interaction types, such as voice, text, QR, or external signal. It includes parameters such as interact_value, beats_Per_Minute (heart rate), and more, that are required for the simulation.


To learn more about the APIs, see:

Augment raw data for simulation

Data augmentation is the process of creating variations of the data collected during RDC for the purpose of simulation. The following example illustrates how to begin data augmentation for the (clone) simulation using the Interaction/StartDataAugmentation endpoint:

API

POST {{Server_URL}}/api/services/app/v2/Interaction/StartDataAugmentation

To learn more about the API, see Begin data augmentation for clone simulations.

Request sample

{
  "session_ID": 31491, // Interaction session ID (stage) from the data collection to be augmented
  "simulation_Project_ID": 146 // The project ID of the data collection project
}

Response sample

{
    "result": {
        "augmentation_Patch_ID": 31491,
    },
    "targetUrl": null,
    "success": true,
    "error": null,
    "unAuthorizedRequest": false,
    "__abp": true
}

Step 6: Run the simulation

Unlike RDC, in which a few real human users perform the interactions to set up a baseline, in the simulation, the users are Virtual Humans (VHs), who are EDAA™-powered emotionally-driven NPCs cloned from the profiles of the real users.

When running a simulation, you must do the following:

Summon the VHs

At the beginning of running the simulation, you must summon your VH entities (bring them to life) so that they can start interacting with the solution. You can do this using the SummonNPC endpoint in the Interaction API.

The API creates a virtual twin of the specified real human user who was involved in the raw data collection process.

Example: Summoning a VH

The following example illustrates how to summon a VH cloned from the psychological profile of an existing user:

API

POST {{Server_URL}}/api/v2/Interaction/SummonNPC

To learn more about the API, see Summon a VH.

Request sample

{
  "entity_ID": 342543, // Entity ID of the VH cloned from a real end-user
  "user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile must power the VH
  "projectId": 34245
}

Response sample

{
  "id": 123, // VH ID of the summoned VH
  "projectId": 34245,
  "user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile powers the VH
  "entity_ID": 342543,
  "active": true,
  "lastActiveTime": "2024-11-23T15:24:54.465Z"
  "creationTime": "2024-11-23T15:24:54.465Z"
  "endTime": null
}

Validate availability of VHs

VHs can have three possible statuses: Offline, Preparing, and Online.

Your summoned VHs are ready for use in interactions only when they are in the Online status. If you try to perform an interaction with a VH that isn't Online, an error occurs because its instance isn't online yet.

You can use the SummonedEntityStatus endpoint of the Interaction API to view the availability statuses of summoned VHs. To learn more about the API, see View the status of all summoned VHs.
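The following is a minimal polling sketch built on the call_api helper from earlier. The SummonedEntityStatus endpoint name and the Offline, Preparing, and Online statuses come from this tutorial, but the full path, request fields, and response shape used below are assumptions; check View the status of all summoned VHs for the actual schema.

import time

def wait_until_online(project_id: int, vh_id: int, poll_seconds: int = 5, timeout_seconds: int = 300) -> None:
    """Poll the summoned VHs of a project until the given VH reports the Online status."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status_response = call_api(
            "/api/v2/Interaction/SummonedEntityStatus",  # assumed full path for the endpoint
            {"projectId": project_id},                   # assumed request field
        )
        # Assumed response shape: a list of summoned VHs, each with an "id" and a "status".
        statuses = {vh["id"]: vh["status"] for vh in status_response["result"]}
        if statuses.get(vh_id) == "Online":
            return
        time.sleep(poll_seconds)
    raise TimeoutError(f"VH {vh_id} did not reach the Online status within {timeout_seconds} seconds")

# Hypothetical IDs: the simulation project created in Step 4 and the VH summoned above.
wait_until_online(project_id=745, vh_id=123)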

Simulate interactions

Running a simulation simply means running a script in Unreal Engine that sequentially executes the different interactions that, in sum, make up the simulated experience. Do the following:

  1. List and loop through all stages retrieved when generating data for the simulation (during Step 5: Generate data for simulation).

  2. For each stage, replace the parameters with those from the simulation, then initialize, run, and end the interactions using the Interaction API, as sketched below. (For more information, see Running interactions.) This step represents actually running the simulation and enabling the VHs to interact with the objects. Note: The Interaction API sends the initial state of all objects to Affective Computing (powered by Virtue) by providing the object ID and the triggered action ID.
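The following sketch outlines what that loop could look like, reusing the call_api helper from earlier. The endpoint paths and request fields mirror the Interaction API samples in this tutorial, while the structure of each stage dictionary is an assumption; in a real solution, the equivalent loop would be driven from your Unreal Engine script.

def simulate_stages(simulation_project_id: int, vh_user_id: str, stages: list) -> None:
    """Replay the retrieved stages as interactions performed by a summoned VH."""
    for stage in stages:
        # 1. Initialize an interaction session for the VH (the clone of a real end-user).
        session = call_api(
            "/api/app/v2/Interaction/InitializeSession",
            {
                "externalUserID": vh_user_id,
                "projectID": simulation_project_id,
                "foreign_identity": stage.get("foreign_identity", "Simulated stage"),
                "language": "en",
            },
        )
        session_id = session["result"]["interaction_Session"]

        # 2. Replay the stage, replacing the RDC parameters with the simulation ones.
        call_api(
            "/api/app/v2/Interaction/Interact",
            {
                "interaction_Session": session_id,
                "interact_type": stage["interact_type"],    # for example, "external_signal"
                "interact_value": stage["interact_value"],  # for example, an interaction channel identifier
                "mode": "action",
            },
        )

        # 3. End the interaction session before moving on to the next stage.
        call_api("/api/app/v2/Interaction/end_interaction", {"interaction_Session": session_id})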

Best practice

Terminate the VHs

After completing the simulation (looping through all the experience data), you must terminate the VHs.

You can do this using the KillNPC endpoint in the Interaction API.

Example: Terminating a VH

The following example illustrates how to terminate a VH:

API

POST {{Server_URL}}/api/v2/Interaction/KillNPC

To learn more about the API, see Terminate the VHs.

Request sample

{
  "npc_ID": 123 // VH ID of the summoned VH
}

Response sample

{
  "id": 123, // VH ID of the summoned VH
  "projectId": 34245,
  "user_ID": "john_doe", // User ID of the real end-user whose cloned psychological profile powers the VH
  "entity_ID": 342543,
  "active": false,
  "lastActiveTime": "2024-11-28T12:32:57.465Z"
  "creationTime": "2024-11-23T15:24:54.465Z"
  "endTime": "2024-11-28T12:32:57.465Z"
}

Step 7: Analyze the results

You can analyze the level of engagement in each session to validate your solution's performance. You can do this using the Reporting/GetEngagementBySession API. See the following example:

API

GET {{Server_URL}}/api/services/app/Reporting/GetEngagementBySession

To learn more about the API, see Viewing insights from project data.

Request sample

{
    "sessionId": 238944,
    "projectID": {{projectID}}, // The project ID of the project
    "foreign_identity": "User's response to boho design suite",
    "language": "en",
    "client_ip": "10.1.192.128"
}

Response sample

{
    "result": [
        {
            "sessionId": 238944,
            "engagement": 0.0,
            "stage": 0,
            "action": "",
            "entityId": 16812,
            "object_status": []
        },
        {
            "sessionId": 238944,
            "engagement": 50.0,
            "stage": 1,
            "action": "Do you like this design?",
            "entityId": 16812,
            "object_status": []
        },
        {
            "sessionId": 238944,
            "engagement": 1.0,
            "stage": 4,
            "action": "Would you like to replace the light fixture with a rattan chandelier",
            "entityId": 16812,
            "object_status": []
        },
        {
            "sessionId": 238944,
            "engagement": 50.0,
            "stage": 5,
            "action": "Are you fond of the beach cafe design aesthetic.",
            "entityId": 16812,
            "object_status": []
        },
        {
            "sessionId": 238944,
            "engagement": 50.0,
            "stage": 6,
            "action": "Let me show you how this would look in wood.",
            "entityId": 16812,
            "object_status": []
        }
    ],
    "targetUrl": null,
    "success": true,

Additionally, as the solution is built in Unreal Engine, you can define where and how the simulation results need to be consumed. You can even use an external analytics or reporting tool to visualize and consume the results.
