About our framework

Learn about Orchestra and EDAA™, the core components of the Affective Computing by Virtue innovation.

Orchestra - The foundation for seamless integration and innovation

What is Orchestra?

Orchestra is Virtue's platform-as-a-service (PaaS) that serves as the foundation for seamless integration with Affective Computing by Virtue.

This infrastructure lets you interact seamlessly with EDAA™ (the technology that powers our framework) by creating and managing connections with your solution.

Orchestra provides streamlined access to Affective Computing by Virtue through two key channels:

The Portal

✔️ GUI-based SaaS platform that requires no coding skills

✔️ Enables building and managing solutions that leverage EDAA™ (our technology)

✔️ Includes data visualization tools for monitoring performance and actionable insights

Public APIs

✔️ A low-code approach for building tailored and scalable solutions

✔️ Direct access to EDAA™ for developers

✔️ Enables integrating and customizing solutions through API calls
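To make the low-code channel concrete, here is a minimal sketch of assembling a request body for an Orchestra-style API call. The endpoint concept, field names (`user_id`, `signals`, `channels`), and values are illustrative assumptions, not the documented Orchestra API.

```python
# Hypothetical sketch of a low-code integration payload.
# All field names below are assumptions for illustration only.
import json

def build_analysis_request(user_id: str, signals: dict) -> dict:
    """Assemble a request body for a hypothetical emotion-analysis endpoint."""
    return {
        "user_id": user_id,
        "signals": signals,      # e.g. raw sensor readings from your solution
        "channels": ["input"],   # interaction channel, in the docs' terminology
    }

payload = build_analysis_request("user-123", {"heart_rate": 72, "gsr": 0.4})
body = json.dumps(payload)  # ready to send to the (hypothetical) endpoint
```

In a real integration, the payload would be posted to the endpoint documented in the API reference, with authentication handled per your Orchestra credentials.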

Why use Orchestra?

Orchestra simplifies access to the Affective Computing by Virtue framework. It eliminates the need for direct interaction with EDAA™ and ensures that all processes are seamlessly routed through its infrastructure.

It is designed for flexibility and supports both no-code and low-code approaches to solution configuration and customization, catering to diverse technical needs and expertise levels.

Additionally, Orchestra offers scalability and enables organizations to develop and deploy solutions that evolve alongside their growth and operational requirements.

For more information, see How does our platform work?

EDAA™ - The heart of Virtue's Affective Computing innovation

Intelligence is never complete without emotions, and AI is no exception.

What is EDAA™?

EDAA™ (Emotional Data Analysis and Automation) is our scientific, foundational, and multimodal deep learning AI model that bridges the gap between data and human emotions. It leverages insights from research in the fields of psychology, affective computing, and deep learning.

EDAA™ integrates emotional intelligence with traditional data-based AI models to combine data-driven insights and human-centric interpretation, thereby enabling you to build ethical and unbiased AI-powered solutions.

Through behavioral and environmental understanding, EDAA™ can interpret psychological cues and ensure individualized, real-time adaptation that enables meaningful, empathetic human-solution interactions.

Why use EDAA™?

  • Emotional intelligence: EDAA™ transcends data clusters to analyze psychological patterns and individual motivations.

  • Bias-free decision making: EDAA™ employs auto-reinforcement and metacognition engines to deliver unbiased actions.

  • Security and privacy: EDAA™ uses military-grade cybersecurity and anonymization techniques.

What makes EDAA™ unique?

EDAA™ is not just a technology; it’s a paradigm shift in AI solutions, enabling organizations to build empathetic, personalized, and ethical solutions. From enhancing user experiences to addressing complex emotional needs, EDAA™ sets a new standard for human-powered AI through:

  • Theory of Mind integration: EDAA™ leverages the Theory of Mind (ToM) to understand unique human emotions and intentions, enabling empathetic, individualized responses.

  • Depth psychological profiling: EDAA™ analyzes over 200 markers and 24 psychological patterns to achieve precise, personalized user profiling based on motivation.

  • No brute-force learning: EDAA™ uses auto-reinforcement to train itself through real-time impacts, avoiding the limitations of dependency on massive data sets.

  • Multimodal emotional analysis: By contextualizing data from emotional, behavioral, and environmental cues, EDAA™ accurately differentiates emotions within the same signal.

  • Cutting-edge metacognition engines: EDAA™ integrates six engines and twelve models to process data dynamically and deliver bias-free responses in real time.

  • Privacy-first design: With military-grade cybersecurity and anonymization protocols, EDAA™ ensures compliance with GDPR and safeguards user data.

Features of EDAA™


Metacognition engine

Affective Computing is powered by EDAA™'s engines. The Metacognition engine provides solution-agnostic metrics to measure solution success:

  • Engagement: Level of user motivation reached

  • Efficiency: Efficiency of a specific logic strategy in delivering personalization

  • Micro-moment: The most appropriate moment for action delivery

Also see Feedback system.

Data management

  • Data validation: Ability to ensure data accuracy

  • Data normalization: Ability to ensure value consistency

  • Data vectorization: Ability to convert data into numbers
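The three data-management steps above can be sketched in code. The field names, validity range, and scaling rule below are illustrative assumptions, not EDAA™ internals:

```python
# Illustrative sketch of validation, normalization, and vectorization.
# The "heart_rate" field and its 30-220 BPM range are assumed for the example.

def validate(record: dict) -> bool:
    """Data validation: reject records with missing or out-of-range values."""
    return "heart_rate" in record and 30 <= record["heart_rate"] <= 220

def normalize(record: dict) -> dict:
    """Data normalization: rescale values to a consistent 0-1 range."""
    return {"heart_rate": (record["heart_rate"] - 30) / (220 - 30)}

def vectorize(record: dict) -> list[float]:
    """Data vectorization: convert the record into a numeric vector."""
    return [record["heart_rate"]]

raw = {"heart_rate": 125}
vector = vectorize(normalize(raw)) if validate(raw) else None
```

Chaining the steps in this order (validate, then normalize, then vectorize) keeps malformed inputs from ever reaching the numeric pipeline.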

EDAA™ can intelligently generate contextually-relevant actions and logics for projects.

Digital twin simulation with Virtual Humans (VHs) to recreate experiences through:

  • Exact scenario simulation using clones

  • Bulk simulation

Emotionally-driven Virtual Humans, powered by EDAA™ and given psychological profiles based on those of real humans, participate in the simulated experience and replicate authentic human psychological responses.

Motivation engine

EDAA™'s Motivation engine can discern and align your solution with the motivations of users within the project environment. It accommodates both imposed (preset) and free (user-determined) motivations.

The Motivation engine enables EDAA™ to understand your users' current state of mind, detect anomalies and deviations from the expected behavior, and ensure that users are in their optimal state of mind.

This engine helps you build human-centric solutions by enabling EDAA™ to recognize and preserve user motivation.
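One simple way to detect deviations from expected behavior, as the Motivation engine is described as doing, is to compare a new reading against the user's baseline. The z-score rule, threshold, and engagement values below are illustrative assumptions, not EDAA™'s actual method:

```python
# Toy anomaly check: flag readings that deviate strongly from a user's baseline.
# The 2-sigma threshold is an assumption for illustration.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, threshold: float = 2.0) -> bool:
    """Return True when `latest` deviates from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

engagement_history = [0.8, 0.75, 0.82, 0.78, 0.8]
alert = is_anomalous(engagement_history, 0.2)  # a sharp drop in engagement
```

A production system would of course use richer behavioral models, but the pattern — baseline, deviation, threshold — is the same.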

Advantage

The Motivation engine provides you with the following features:

  • Ability to determine and manage user motivation during free and imposed motivation

  • Ability to detect mismatches in user behavior with respect to the expected motivation

Sensations engine

EDAA™'s Sensations engine processes different inputs and calculates each user's state of mind.

It normalizes input variables and manages operations such as:

  • Speech-to-text conversion

  • Translation

  • Error generation

  • Project setup validation

  • Profile and psychological state calculations

It analyzes sensor inputs, such as audio, BPM, GSR, and video feeds, to accurately assess users' states of mind.
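The normalization step can be sketched as rescaling each heterogeneous sensor input to a common range before any state-of-mind calculation. The sensor ranges and the averaging step below are illustrative assumptions, not the Sensations engine's actual formulas:

```python
# Hedged sketch: normalize heterogeneous sensor readings to 0-1.
# The ranges below (BPM, GSR) are assumptions for illustration.

SENSOR_RANGES = {
    "bpm": (40.0, 200.0),  # heart rate, beats per minute
    "gsr": (0.0, 20.0),    # galvanic skin response, microsiemens
}

def normalize_inputs(readings: dict) -> dict:
    """Rescale each raw sensor reading to a common 0-1 range, clamped."""
    out = {}
    for name, value in readings.items():
        lo, hi = SENSOR_RANGES[name]
        out[name] = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return out

def arousal_score(readings: dict) -> float:
    """Toy state-of-mind estimate: mean of the normalized inputs."""
    norm = normalize_inputs(readings)
    return sum(norm.values()) / len(norm)

score = arousal_score({"bpm": 120.0, "gsr": 10.0})
```

Normalizing first means a 160-BPM heart rate and a high GSR reading contribute on the same scale, which is what makes multimodal inputs comparable at all.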

The Sensations engine contains the following components that process internal and external perception:


Internal perception

The Internal perception component analyzes internal stimuli, such as physiological responses, and constructs a comprehensive psychological profile of each user.

This feature evaluates involuntary physical reactions and maps them with psychological states, which contributes to a deep understanding of each user's mental and emotional status.

Advantage

The Internal perception component provides you with profound insights about the subconscious aspects of user behavior. This enhances the personalization and effectiveness of interactions.

Use cases

  • In solutions that aim to enhance human well-being, you can align user experiences with the physiological states of your end-users for more effective health interventions.

  • In customer service solutions, you can improve engagement strategies and make interactions more empathetic and user-centric.

Features

  • Ability to categorize mood states and emotional responses to interpret the emotional context of user interactions, which, in turn, facilitates responding appropriately to different emotional states

  • Ability to establish users' psychological profiles

  • Ability to establish user personas through diagnostic questions

  • Ability to validate and adjust previously-established psychological profiles of users by considering their current psychological states

  • Ability to ensure that your solution's end goal and experience's ideal outcome are aligned by understanding contextual reality

  • Ability to establish users' preliminary psychological profiles by posing diagnostic questions at the following levels:

      • Parent

      • Domain

      • Tenant/project

External perception

The External perception component processes raw inputs about the environment, physiological aspects, and user motion. This includes data such as location, camera inputs, audio, heart rate, EEG, and behavioral indicators like thinking time.

By integrating these external perceptions with internal ones, EDAA™ comprehensively understands and predicts user states.

Advantage

EDAA™'s ability to process external perceptions enhances its accuracy in assessing user states, which increases personalization and efficiency in user interactions.

Use cases

  • You can use environmental data for context-aware user experience designs, such as adapting services to the time of day or current weather.

  • You can process physiological and motion data in health and wellness applications to gain and utilize insights about user well-being.

Features

  • Ability to support the following inputs:

      • Speech vectors

      • Heart rate

      • EEG

  • Ability to support the following inputs:

      • Location (X, Y)

      • Speed

      • Crowd counting

      • External signal

  • Ability to support the following inputs:

      • Time to answer

      • Meaning

      • QR scan

      • Facial recognition

Strategy engine

EDAA™'s Strategy engine integrates the analysis of multiple components and features to make informed decisions based on the Sensations engine's depth profiling and analysis of user states.

This engine predicts and delivers appropriate actions for users in specific situations and continuously improves content and interaction selection.

Advantage

The Strategy engine provides you with the following features:

  • Ability to personalize actions, content delivery, and triggered actions for end-users

  • Ability to predict and automate effective actions and interactions for end-users

  • Ability to do the following:

      • Create, verify, and anchor actions

      • Auto-feed actions through connections or by uploading a document

  • Ability to generate and select features through attributes

  • Ability to define the following interaction channels:

      • Input channels

      • Interaction channels

      • Dual channels (both input and output)

  • Ability to customize personalization delivery by configuring the following components using a no-code blueprinting tool:

      • Activators

      • Conditions

      • Actions

  • Ability to customize personalization delivery by configuring the following components using our low-code APIs:

      • Activators

      • Conditions

      • Actions
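The activator / condition / action pattern described above can be sketched in code, as it might be expressed through the low-code channel rather than the blueprinting tool. All class names, field names, and values here are illustrative assumptions:

```python
# Minimal sketch of an activator / condition / action rule.
# Names and values are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    activator: str                     # event that wakes the rule up
    condition: Callable[[dict], bool]  # must hold for the action to fire
    action: Callable[[dict], str]      # personalization delivered to the user

def dispatch(event: str, context: dict, rules: list[Rule]) -> list[str]:
    """Run every rule whose activator matches and whose condition holds."""
    return [r.action(context) for r in rules
            if r.activator == event and r.condition(context)]

rules = [Rule(activator="session_start",
              condition=lambda ctx: ctx["engagement"] < 0.5,
              action=lambda ctx: "send_motivational_prompt")]

actions = dispatch("session_start", {"engagement": 0.3}, rules)
```

Separating the three components this way is what lets the same action be reused under different activators and conditions, which mirrors the configurability the no-code and low-code channels both expose.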
