Logic blueprinting

Learn how to use logic blueprints to customize the behavior of Affective Computing (powered by Virtue) in your solutions.

Logics allow Affective Computing (powered by Virtue) to deliver personalization according to the context and conditions of the solution experience.

You can define the logics for your projects either by using the no-code blueprinting tool or the low-code integration with our Orchestra APIs.

The latter approach is useful, for example, when logics are defined in third-party software (rather than in Affective Computing directly) and need to be "translated" into the language and standards of EDAA™.

Logics and actions can also be anchored to prevent Affective Computing's generative capabilities from evolving them.

Advantage

The blueprinting tool simplifies logic creation and reduces the effort needed to customize Affective Computing's behavior (through your projects' parametrization), so that personalization is delivered at precisely the right moments and tailored to the solution. It gives you complete control over the design of your solution experience.

Use case

You can use the blueprinting tool to customize how your Affective Computing-powered solution greets an end user who has a self-centered profile (or any other specific psychological profile type).

How it works

The structure of a logic includes the following components:

  1. Activator: The subject of the logic, i.e. for whom this logic is activated. You can activate the logic for the following subjects:

    • Profile: Psychological profiles. This means that the logic is activated for all users having the selected profile.

    • Entity: Entity ID of a VH. This means that the logic is activated only for a specific VH.

  2. Condition: The conditions for activating the logic. The following conditions are supported:

    1. Environmental

      • Location

      • Speed

      • Time

      • External signal

      • Camera ID

      • Crowd counting

    2. User motion

      • User detected

    3. User geotargeting

      • Location

      • QR

      • Camera

      • Beacon

  3. Action: The action to deliver. It belongs to a Content, Interaction, or Triggered Action attribute, or is a Specific Action (when access to a specific action is required without attribution).

Logical operators (AND and OR) can also be used when defining an EDAA™ logic. These operators define the flow of the logic, letting you include multiple conditions and define rules of interdependence between conditions and actions.
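The activator–condition–action structure and the AND/OR operators described above can be sketched as a small data model. This is a minimal illustration only; the class, field names, and condition predicates are assumptions for the sketch, not the actual EDAA™ schema:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Logic:
    """Illustrative logic blueprint: activator, conditions, operator, action."""
    activator: dict                                   # e.g. {"type": "profile", "value": "self-centered"}
    conditions: list                                  # (name, predicate) pairs evaluated against a context
    operator: str                                     # "AND" or "OR" joins the condition results
    action: str                                       # action delivered when the conditions pass

    def evaluate(self, context: dict) -> Optional[str]:
        results = [predicate(context) for _, predicate in self.conditions]
        passed = all(results) if self.operator == "AND" else any(results)
        return self.action if passed else None

# A logic that greets a user with a "self-centered" profile when the user
# is detected at a specific location (both conditions must hold: AND).
greeting = Logic(
    activator={"type": "profile", "value": "self-centered"},
    conditions=[
        ("user_detected", lambda ctx: ctx.get("user_detected", False)),
        ("location", lambda ctx: ctx.get("location") == "lobby"),
    ],
    operator="AND",
    action="deliver_personalized_greeting",
)

print(greeting.evaluate({"user_detected": True, "location": "lobby"}))
```

Switching `operator` to `"OR"` would make the logic fire when either condition holds, which is how the operators let multiple conditions interdepend.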

Logics can be of the following types:

  1. User calibration: A specific logic that should be activated during the user calibration mode only.

  2. Action: The general type of logic that should always result in action delivery.

You can anchor a logic to prevent EDAA™ from generating new logics based on it. Instead, it is always used as-is.

You can also anchor actions to prevent EDAA™ from changing them and generating additional actions inside the respective attribute.
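As a rough sketch of how anchoring might gate generation, the snippet below skips anchored logics when producing variants. The function and field names here are hypothetical, not part of the EDAA™ API:

```python
def generate_variants(logics):
    """Return candidate variants derived only from non-anchored logics."""
    variants = []
    for logic in logics:
        if logic.get("anchored", False):
            continue  # anchored logics are always used as-is, never evolved
        variants.append({**logic, "name": logic["name"] + "-variant"})
    return variants

logics = [
    {"name": "greet-vip", "anchored": True},      # anchored: used as-is
    {"name": "greet-default", "anchored": False}, # eligible for generation
]
print(generate_variants(logics))  # only greet-default yields a variant
```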

For detailed information about creating and configuring logics, see Logics.
