Diagnostics and calibration

Learn how Affective Computing (powered by Virtue) establishes users' psychological profiles and calibrates solutions based on both motivation and end users' current psychological states.

EDAA™, the underlying technology that powers Affective Computing, embodies the principles of the Theory of Mind (ToM). Therefore, it is capable of identifying and analyzing its users' psychological profiles, detecting variations in their emotional states, and calibrating the solution to provide personalization with a high degree of human-centricity.

EDAA™ understands its users' psychological profiles through the internal processes of diagnostics and calibration:

  • Diagnostics is the process of analyzing and establishing a preliminary (baseline) psychological profile for each user.

  • Calibration is the process that helps EDAA™ understand both the solution’s context and the current psychological state (mood) of its users. Calibration happens at two levels: product and user.

Advantages

Diagnostics and calibration enable you to build solutions that provide automated real-time personalization that is accurately tailored to users' psychological profiles and current emotional states.

Through these processes, Affective Computing (powered by Virtue) overcomes common limitations of typical machine learning tools: EDAA™ doesn't require large volumes of training data, long training periods, or manual resources for training and model management. Moreover, because EDAA™ analyzes users' motivations and emotional states in real time rather than relying on training data, it also avoids the biases inherent in such data sets.

As a result, your solutions can deliver accurate personalization and more humanized interactions, providing relevant, adaptive experiences that improve user engagement and effectiveness.

Use cases

  • Validation: You can build solutions to virtually validate a product, service, or experience before moving forward with production. Validation takes place in a simulated environment and uses Affective Computing-powered Virtual Humans (VHs, emotionally driven non-playable characters) based on cloned or augmented emotional data from real human users. Affective Computing's diagnostics and calibration capabilities can accurately detect variations in the VHs' emotional states and help align the solution with the product's and users' motivations.

  • Personalized user experiences: Affective Computing-powered VHs can customize their interactions with real human users based on the users' current moods.

  • Human safety: You can leverage Affective Computing (powered by Virtue)'s capabilities to analyze users' psychological profiles and moods to predict and detect anomalistic behavior that would endanger humans. You can build solutions that could deliver preventative actions, such as mood enhancement.

How it works

Diagnostics

Diagnostics is the process of analyzing and establishing a preliminary (baseline) psychological profile for each user. This process takes place when a new user interacts with an Affective Computing-powered solution for the first time.

To provide accurate personalization, users must be "diagnosed" by EDAA™, the underlying technology that powers Affective Computing, before they interact with the solution. Diagnostics is the first step toward this diagnosis, as it enables EDAA™ to determine each user's preliminary psychological profile.

User diagnosis is not entirely determined by the diagnostics process; diagnostics only enables EDAA™ to establish each user's initial profile, which serves as the baseline. In subsequent interactions, the process of user calibration enables EDAA™ to update each user's profile, if required, based on the user's current psychological state.

During diagnostics, EDAA™ presents each new user with a concrete scenario (called the diagnostic interaction) in which it poses a set of questions (typically 5-7).

The questions aim to trigger unconscious emotional responses from users and activate their inefficient behaviors, which in turn give EDAA™ enough information to determine and generate their initial psychological profile.

When users participate in the diagnostics process, EDAA™ analyzes the cognitive processes revealed by their physiological and behavioral responses.
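
Conceptually, the diagnostic interaction boils down to a short question-and-answer loop feeding an analysis step. The sketch below is a minimal illustration of that flow, assuming hypothetical ask and analyze callbacks; none of these names belong to the actual EDAA™ interface.

```python
from dataclasses import dataclass, field

@dataclass
class BaselineProfile:
    """Preliminary (baseline) psychological profile built from diagnostics."""
    traits: dict = field(default_factory=dict)

class DiagnosticsSession:
    """Hypothetical sketch of a diagnostic interaction; not the real EDAA(TM) API."""

    def __init__(self, questions):
        # Typically 5-7 curated questions designed to trigger
        # unconscious emotional responses and inefficient behaviors.
        self.questions = questions

    def run(self, ask, analyze):
        """Pose each question and fold the analyzed signals into a profile.

        `ask` delivers a question and returns the user's response;
        `analyze` maps physiological/behavioral responses to trait
        signals. Both are illustrative stand-ins.
        """
        profile = BaselineProfile()
        for question in self.questions:
            response = ask(question)
            profile.traits.update(analyze(response))
        return profile
```

In this picture, diagnostics ends once every question has been analyzed; the resulting profile is only the baseline that user calibration later updates.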

For some domains (for example, safety and security), a fixed set of questions is posed. Our Ethical AI team designs and curates these questions to trigger the required emotional responses when users answer them.

However, for better personalization, diagnostics can include additional questions that are based on the solution's context. You can consult our Ethical AI team to understand how to design diagnostics questions for personalization.

When building your solution, you can also define questions to identify user personas. These questions do not impact the psychological profiles determined for the users but help to improve personalization. For example, user persona questions can help clients distinguish between technical and non-technical users.

In projects, diagnostics is set up through logics (for detailed information, see Understanding project components). When the diagnostics logic is invoked, a relevant question is delivered to the user. (When parameterizing projects, you can group related questions under the same attribute so that any of them can be delivered when the respective logic is triggered.)

You can use separate attributes to group actions relevant to different personas so that you can set up logics to take personas into consideration during action delivery.
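
To make this concrete, a project's parameterization could group questions and actions roughly as follows. The structure and names below are a hypothetical sketch, not the actual project schema; see Understanding project components for the real configuration model.

```python
# Hypothetical parameterization sketch; attribute names and structure
# are illustrative assumptions, not the real project schema.
project_parameters = {
    "attributes": {
        # Related diagnostics questions grouped under one attribute;
        # any of them can be delivered when the diagnostics logic triggers.
        "risk_perception": {
            "questions": [
                "What would you do first in an unexpected situation?",
                "How do you usually react under time pressure?",
            ],
        },
        # Persona questions refine personalization but do not affect
        # the psychological profile itself.
        "persona": {
            "questions": ["How familiar are you with developer tools?"],
        },
    },
    # Actions grouped per persona, so logics can factor personas
    # into action delivery.
    "actions": {
        "technical_user": ["show_api_walkthrough"],
        "non_technical_user": ["show_guided_tour"],
    },
}
```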

If EDAA™ doesn't understand a user's answer to a diagnostics question, it asks a different, related question. After 5 unsuccessful attempts, the diagnostics process stops with an error.

On the other hand, if a user doesn't understand a diagnostics question and asks for it to be repeated, EDAA™ repeats the question.
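
In other words, question delivery behaves like a bounded retry loop: repeat requests are honored as-is, while misunderstood answers consume one of the 5 attempts. The helper below is a hypothetical sketch of that behavior; `deliver`, `understood`, and the "REPEAT" sentinel are illustrative assumptions, not real EDAA™ calls.

```python
def pose_with_fallback(question, related_questions, deliver, understood,
                       max_attempts=5):
    """Hypothetical sketch of diagnostics question delivery.

    `deliver` sends a question and returns the user's answer (or a
    repeat request); `understood` reports whether the answer could be
    parsed. Both are illustrative stand-ins, not real EDAA(TM) calls.
    """
    current = question
    attempts = 0
    while attempts < max_attempts:
        answer = deliver(current)
        if answer == "REPEAT":
            continue  # user asked for the question again; no attempt consumed
        if understood(answer):
            return answer
        attempts += 1  # answer not understood: counts as one failed attempt
        if related_questions:
            current = related_questions.pop(0)  # switch to a related question
    raise RuntimeError(
        f"Diagnostics stopped with an error after {max_attempts} attempts"
    )
```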

Product calibration

Product calibration is the operation mode that enables EDAA™ to establish the solution’s reality (context). In this mode, Affective Computing (powered by Virtue) validates and aligns the solution’s end goal with the ideal outcome of the experience.

Affective Computing (powered by Virtue) projects run in this mode during initial solution deployment. You can also manually activate or deactivate this mode for your projects at any time.

During product calibration, all of Affective Computing's functionality is fully available except personalization. This is because, in the initial stages of running a solution, EDAA™ must first generate motivational checkpoints before proceeding to personalization.

You must use API calls, not the Portal, to visualize and verify the motivational checkpoints.
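
For example, fetching the checkpoints over HTTP might look like the snippet below. The base URL, endpoint path, and response shape are assumptions made for illustration; consult the API reference for the actual contract.

```python
import requests

# Hypothetical endpoint; the path and response shape are assumptions.
BASE_URL = "https://api.example.com/v1"

def get_motivational_checkpoints(project_id: str, token: str):
    """Fetch the motivational checkpoints generated during product calibration."""
    response = requests.get(
        f"{BASE_URL}/projects/{project_id}/motivational-checkpoints",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```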

User calibration

User calibration is the process of validating and adjusting the previously established psychological profile of existing users based on their current psychological states.

This process occurs at the beginning of each interaction session between the solution and existing users, and ensures that Affective Computing (powered by Virtue) considers each user’s latest emotional state when delivering personalization. This process typically takes up to 3 minutes.

However, based on the solution's requirements, you can customize its duration and even automate it. For example, you can set it up to require a minimum number of interactions.
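
For instance, a solution might expose calibration settings along these lines; the option names below are illustrative assumptions, not real configuration keys.

```python
# Hypothetical user calibration settings; key names are illustrative.
user_calibration_config = {
    "max_duration_seconds": 180,  # default behavior: up to ~3 minutes
    "min_interactions": 4,        # require at least this many interactions
    "auto_complete": True,        # end calibration automatically once satisfied
}
```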

During user calibration, EDAA™ operates in passive mode and collects inputs from the user to determine if the psychological profile it determined previously needs to be updated.

EDAA™ does this by analyzing physiological inputs, posing questions and analyzing responses, or observing user behavior. EDAA™ then generates motivational checkpoints for users that you can verify manually or through an API endpoint.
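
Verifying through an API endpoint could mirror the product-level sketch earlier, scoped to a single user; again, the path and fields are assumptions for illustration.

```python
import requests

def get_user_checkpoints(project_id: str, user_id: str, token: str):
    """Fetch a user's motivational checkpoints (hypothetical endpoint)."""
    url = (f"https://api.example.com/v1/projects/{project_id}"
           f"/users/{user_id}/motivational-checkpoints")
    response = requests.get(
        url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```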

In solutions that leverage Affective Computing's simulation capabilities, EDAA™ operates entirely in user calibration mode during both raw data collection and simulation events.

For example, consider a solution that simulates an elevator ride during which specific music is played in the background (to study the impact of the music on passengers' moods). The data required for the simulation is obtained in one of the following ways (a configuration sketch follows the list):

  • It is collected during a raw data collection event, in which the experience is recreated with real human users under controlled conditions.

  • It is taken from an existing data set that maps different types of music to their impact on humans and is plugged into the solution.

  • It is generated as a synthetic data set.
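
As a rough illustration, a simulation's data source could be declared along these lines. The structure and key names below are hypothetical assumptions, not part of the actual product configuration.

```python
# Hypothetical sketch of wiring one of the three data sources into a
# simulation project; all keys and values are illustrative assumptions.
simulation_config = {
    "scenario": "elevator_ride_with_background_music",
    # Exactly one of the three source types described above:
    "data_source": {
        "type": "raw_collection_event",  # or "existing_dataset" / "synthetic_dataset"
        "conditions": "controlled",      # experience recreated with real human users
    },
}
```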
