Multimodal data processing
Learn how Affective Computing (powered by Virtue) processes different types of input data.
Because EDAA™, the underlying technology that powers Affective Computing, is modeled after the human brain, it requires data inputs, just as the brain requires sensory input. These inputs take the form of the end user's physical, physiological, and motion data captured within the experience environment.
Affective Computing (powered by Virtue) also provides multimodal, real-time data streaming capabilities that allow it to process diverse data types, including image, video, sound, and text.
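As a rough illustration of what consuming these modalities together might look like, the sketch below streams image, video, sound, and text chunks concurrently. The MultimodalClient class, its send method, and the payloads are hypothetical placeholders introduced for this example, not part of the Affective Computing (powered by Virtue) API.

```python
# Illustrative sketch only: MultimodalClient and its send method are hypothetical
# stand-ins for an SDK; they are not part of the documented API.
import asyncio


class MultimodalClient:
    """Hypothetical client that forwards modality streams to a processing backend."""

    async def send(self, modality: str, payload: bytes) -> None:
        # In a real integration this would push the frame or chunk to the backend.
        print(f"sent {len(payload)} bytes of {modality} data")


async def stream_inputs(client: MultimodalClient) -> None:
    # Each modality (image, video, sound, text) is streamed concurrently,
    # mirroring the multimodal, real-time processing described above.
    await asyncio.gather(
        client.send("image", b"<jpeg frame>"),
        client.send("video", b"<h264 chunk>"),
        client.send("sound", b"<pcm audio chunk>"),
        client.send("text", "The user said hello".encode("utf-8")),
    )


if __name__ == "__main__":
    asyncio.run(stream_inputs(MultimodalClient()))
```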
Advantage
The multimodal data processing capability of Affective Computing (powered by Virtue) enables you to create complex, highly personalized, and responsive user experiences that adapt to a wide range of inputs and data types.
Use cases
Use cases for different inputs:
You can develop complex health monitoring systems by utilizing physiological data for real-time user assessments.
In interactive gaming or VR, you can use player motion data to adapt the experience to how the player moves and reacts.
You can use environmental data when developing smart home applications to adjust settings based on user presence and emotional state.
Use cases for different data types:
You can build solutions for crowd control and safety in tourist areas, utilizing image and video processing to monitor visitor emotions.
You can create interactive platforms that respond to combined audio-visual cues, enhancing user engagement and experience.
How it works
You select data inputs when you begin working on a project. The active inputs can differ by mode; for example, some inputs might be active during regular interactions but disabled in diagnostics mode.
EDAA™ receives and processes the following kinds of data inputs (a sketch of one possible representation follows the list):
Physiological
    Speech vectors
    Heart rate
    EEG
Environmental
    Location (X, Y coordinates)
    Speed
    Crowd counting
    External signals
Behavioral
    Time to answer
    Meaning
    QR code scan
    Facial recognition
For detailed information, see Understanding data inputs.
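To make the categories above more concrete, here is a minimal sketch of how a single bundle of data inputs might be represented and filtered by mode. The field names, units, and the mode table are assumptions made for this example; they do not describe EDAA™'s actual input schema.

```python
# Illustrative only: the dataclass fields and the mode-based filtering below are
# hypothetical; they do not describe EDAA(tm)'s actual input format.
from dataclasses import dataclass, field


@dataclass
class DataInputs:
    # Physiological
    speech_vectors: list[float] = field(default_factory=list)
    heart_rate_bpm: float | None = None
    eeg_channels: list[float] = field(default_factory=list)
    # Environmental
    location_xy: tuple[float, float] | None = None
    speed: float | None = None
    crowd_count: int | None = None
    external_signals: dict[str, float] = field(default_factory=dict)
    # Behavioral
    time_to_answer_s: float | None = None
    meaning: str | None = None
    qr_code_scan: str | None = None
    facial_recognition: dict[str, float] = field(default_factory=dict)


# Inputs that are active in each mode (see "How it works" above); some inputs
# enabled during regular interactions may be disabled in diagnostics mode.
MODE_INPUTS = {
    "regular": {
        "speech_vectors", "heart_rate_bpm", "eeg_channels", "location_xy",
        "speed", "crowd_count", "external_signals", "time_to_answer_s",
        "meaning", "qr_code_scan", "facial_recognition",
    },
    "diagnostics": {"heart_rate_bpm", "eeg_channels", "location_xy"},
}


def filter_for_mode(inputs: DataInputs, mode: str) -> dict:
    """Keep only the fields that are enabled for the given mode."""
    enabled = MODE_INPUTS[mode]
    return {k: v for k, v in vars(inputs).items() if k in enabled}


sample = DataInputs(heart_rate_bpm=72.0, location_xy=(3.2, 8.7), time_to_answer_s=1.4)
print(filter_for_mode(sample, "diagnostics"))
```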
These inputs help EDAA™ understand the user's actual situation at the time of the interaction.
Affective Computing (powered by Virtue) is informed about the user's physiological state, how they behave during the interaction (for example, speech patterns that indicate nervousness, or specific facial expressions), and the surrounding environmental conditions.
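Purely as an illustration of how several inputs might jointly inform such an assessment, and not as a description of EDAA™'s actual inference, the toy heuristic below combines a speech-rate feature, heart rate, and time to answer into a rough indicator. The thresholds and weights are arbitrary example values.

```python
# Toy heuristic for illustration only; EDAA(tm)'s actual inference is not documented here.
def nervousness_hint(speech_rate_wpm: float, heart_rate_bpm: float,
                     time_to_answer_s: float) -> float:
    """Combine a few example inputs into a rough 0..1 'nervousness' hint."""
    # Each feature is normalized to 0..1 using arbitrary example thresholds.
    fast_speech = min(max((speech_rate_wpm - 160) / 60, 0.0), 1.0)
    elevated_hr = min(max((heart_rate_bpm - 80) / 40, 0.0), 1.0)
    hesitation = min(max((time_to_answer_s - 2.0) / 4.0, 0.0), 1.0)
    # Weighted sum of the normalized features (weights are illustrative).
    return 0.4 * fast_speech + 0.4 * elevated_hr + 0.2 * hesitation


print(nervousness_hint(speech_rate_wpm=190, heart_rate_bpm=98, time_to_answer_s=3.5))
```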