Understanding data inputs

Learn about the primary groups of user analysis variables and how to set up data inputs for them.

Affective Computing uses the following types of data for user analysis:

| Type | Description | Input variables |
| --- | --- | --- |
| Physiological data | Involuntary responses of the human body that cannot be manipulated. You can detect and collect this data with external devices or sensors that offer: constant, real-time data reading; Bluetooth and ANT+ compatibility; BLE (Bluetooth Low Energy) support; SDK availability; and accessibility of data via Python (see the sketch after this table). | Heart rate; Speech; Galvanic Skin Response (GSR); Brain (EEG); Blood pressure; Breath. For detailed information, see Physiological input variables. |
| User motion data | Data related to physical movements, reactions, and behavior in specific situations. For example, this could include speech (how questions were answered), joystick movement, and more. | Speech analysis variables (time to answer, meaning, number of words, duration of answer, no answer, and pauses between words); Interaction with a front end; QR activation; Domain-specific technical variables; Facial recognition. For detailed information, see User motion input variables. |
| Environmental data | The stimuli that surround the user, such as location, time, weather, and more. This data provides the environmental context during user-solution interactions. | Date; Time; Location; Speed; Facial recognition; Crowd counting; Weather; Pollution; Signals (technical variables for the Safety domain), such as traffic signals; Other domain-specific technical variables. For detailed information, see Environmental input variables. |
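
Because physiological sensors must stream readings continuously over BLE and expose their data to Python, a collection script typically subscribes to the sensor's notifications. The following is a minimal sketch, not a definitive implementation: it reads the standard GATT Heart Rate Measurement characteristic using the open-source bleak library. The device address is a placeholder, and the payload envelope your solution ultimately expects is not shown.

```python
# Minimal sketch: stream heart-rate readings from a BLE sensor into Python.
# Uses the open-source "bleak" BLE library; replace DEVICE_ADDRESS with your
# sensor's actual address.
import asyncio
from bleak import BleakClient

# Standard GATT Heart Rate Measurement characteristic
HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"
DEVICE_ADDRESS = "00:11:22:33:44:55"  # placeholder

def on_heart_rate(_sender, data: bytearray):
    # Per the GATT spec, byte 0 holds flags; if bit 0 is clear, the heart
    # rate is a single byte (uint8), otherwise a little-endian uint16.
    bpm = data[1] if not (data[0] & 0x01) else int.from_bytes(data[1:3], "little")
    print(f"Heart rate: {bpm} BPM")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, on_heart_rate)
        await asyncio.sleep(30)  # stream readings for 30 seconds

asyncio.run(main())
```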

Physiological input variables

The following table describes how to configure data inputs for each physiological data variable:

| Variable | Description | Source* | Combined sources?* | Device | Type of data | Data |
| --- | --- | --- | --- | --- | --- | --- |
| Heart rate | The heart rate (beats per minute) during the stage. | Front end |  | Wearable heart rate sensor with BLE | Structured; beats per minute (BPM) | Values from 50 to 200 |
| Speech | The frequency of the user's voice during the stage. | Front end |  | Edge's microphone settings | Structured; Hertz (Hz) | Audio files |
| Galvanic Skin Response (GSR) | Any sweating during the stage. | Front end |  | Wearable GSR sensor with BLE | Structured; Hertz (Hz) | Values from 0 to 50 |
| Brain (EEG) | The electrical activity of the brain during the stage. Affective Computing supports the following five EEG channels, in this order and format: { "AF3", "T7", "Pz", "T8", "AF4" }. Each channel is a key-value pair. | Front end |  | Wearable EEG sensor with BLE | Structured; Hertz (Hz) | Values from 4 to 200 |
| Blood pressure | Indicates the user's effort during intense physical activity during the stage. | Front end |  | Wearable SpO2 sensor with BLE |  |  |
| Breath | Indicates the user's effort during intense physical activity during the stage. | Front end |  | Wearable SpO2 sensor with BLE |  |  |

*Source: Indicates where the data can be obtained: from the end-user facing front end (for example, an app's GUI) or from Affective Computing's own analysis.

*Combined sources?: Indicates whether this data input must be combined with other sources or is sufficient by itself.
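
To make the EEG format concrete, the following Python sketch shapes one reading as the five key-value pairs listed above and checks the channel order and the 4-200 range from the table. The sample values and the validation helper are illustrative assumptions; the exact payload envelope (timestamps, user IDs, endpoint names) depends on your solution's API contract.

```python
# Sketch of one EEG reading in the channel order the platform expects.
eeg_sample = {
    "AF3": 12.4,  # values must fall within the supported 4-200 range
    "T7": 8.1,
    "Pz": 10.3,
    "T8": 9.7,
    "AF4": 11.9,
}

# Hypothetical validation helper you might run before submitting a sample.
EXPECTED_CHANNELS = ("AF3", "T7", "Pz", "T8", "AF4")

def is_valid_eeg_sample(sample: dict) -> bool:
    return (
        tuple(sample) == EXPECTED_CHANNELS
        and all(4 <= value <= 200 for value in sample.values())
    )

assert is_valid_eeg_sample(eeg_sample)
```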

User motion input variables

The following table describes how to configure data inputs for each user motion data variable:

| Variable | Description | Source* | Device | Type of data | Data | Mandatory?* |
| --- | --- | --- | --- | --- | --- | --- |
| Time to answer | How long the user takes to respond to a question. | Affective Computing | - | Structured; seconds | Values from 0 to infinity | Yes, if Speech is selected as a physiological input variable. |
| Meaning | The literal meaning of the words spoken. | Affective Computing | - | Unstructured; text | Audio | Yes, if Speech is selected as a physiological input variable. |
| Number of words | The number of words in the user's answer. | Affective Computing | - | Structured; numbers | Values from 0 to infinity | Yes, if Speech is selected as a physiological input variable. |
| Duration of the answer | Time taken to answer. | Affective Computing | - | Structured; seconds | Values from 0 to infinity | Yes, if Speech is selected as a physiological input variable. |
| No answer | Indicates that the user did not respond to the question. | Affective Computing | - | Structured; audio | - | Yes, if Speech is selected as a physiological input variable. |
| Pauses between words | Number of seconds between words in the user's answer. | Affective Computing | - | Structured; seconds | Values from 0 to infinity | Yes, if Speech is selected as a physiological input variable. |
| Interaction with a front end or UI | Depends entirely on how the solution is designed and can include multiple options. | Front end | The app or device that provides the UI or front end used for the interaction | Triggered action | Depends on the solution's requirements. | No |
| QR activation | The user triggers predefined logic by scanning a QR code. | Front end | Camera used for scanning the code | Triggered action |  | No |
| Other domain-specific technical data | Depends on the solution's domain and can include multiple parameters. | Front end | The app or device that provides the UI or front end used for the interaction | Triggered action | Depends on the solution's requirements. | No |
| Facial recognition | User identification to trigger predefined logic, or user facial data processing to identify expressions. | Front end | Camera used for facial recognition | Unstructured; video processing | Video | No |

*Source: Indicates where the data can be obtained: from the end-user facing front end (for example, an app's GUI) or from Affective Computing's own analysis.

*Mandatory?: Indicates whether this data input is required for all projects.
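
Affective Computing derives the speech analysis variables itself, so you don't compute them. The following Python sketch only illustrates the semantics of each variable, using hypothetical word-level timestamps such as a speech-to-text step might produce; meaning extraction (an NLP task) is out of scope here.

```python
# Illustrative only: what each speech-analysis variable represents, computed
# from hypothetical (word, start_seconds, end_seconds) timestamps measured
# from the moment the question was asked.
words = [
    ("yes", 1.8, 2.1),
    ("I", 2.6, 2.7),
    ("think", 2.8, 3.2),
    ("so", 3.4, 3.6),
]

time_to_answer = words[0][1]                     # 1.8 s until the first word
number_of_words = len(words)                     # 4
duration_of_answer = words[-1][2] - words[0][1]  # 1.8 s, first word to last
no_answer = number_of_words == 0                 # False: the user did answer
pauses_between_words = [                         # gaps between consecutive words
    later_start - earlier_end
    for (_, _, earlier_end), (_, later_start, _) in zip(words, words[1:])
]

print(time_to_answer, number_of_words, duration_of_answer, pauses_between_words)
```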

Environmental input variables

The following table describes how to configure data inputs for each environmental data variable:

| Variable | Description | Source* | Device | Type of data | Data | Mandatory?* |
| --- | --- | --- | --- | --- | --- | --- |
| Date | The date of the interaction. | Front end | The app or device that provides the UI or front end used for the interaction | Structured; MM/DD/YYYY |  | Yes |
| Time | Time of day and time zone. | Front end | The app or device that provides the UI or front end used for the interaction | Structured; TIME-ZONE_HH:MM:SS | Values from 00:00:00 to 23:59:59 | Yes |
| Location | The user's GPS coordinates. | Front end | GPS sensor | Structured; Lat,Long |  | No |
| Speed | Movement speed. | Front end | GPS sensor | Structured; km/h | Values from 0 to 400 | No |
| Facial recognition | User identification to trigger predefined logic. | Front end | IP camera | Unstructured; video processing |  | No |
| Crowd counting | Random face detection and counting. | Data provided by your on-premises server | IP camera | Unstructured; video processing |  | No |
| Weather | Weather information based on the user's location. | External data feed | External data feed (based on the user's GPS location) | Structured | Depends on the external data feed | No |
| Pollution | The level of contaminants in the air, based on the user's location. | External data feed | External data feed (based on the user's GPS location) | Structured | Depends on the external data feed | No |
| Signals (technical data for the Safety domain) | External signs such as traffic signals. | Front end | Camera | Unstructured; image processing |  | No |
| Other domain-specific technical data | Depends on the solution's domain and can include multiple parameters. | Front end | The app or device that provides the UI or front end used for the interaction | Triggered action | Depends on the solution's requirements. | No |

*Source: Indicates where the data can be obtained: from an end-user facing front end (for example, an app's GUI) or from Affective Computing's own analysis.

*Mandatory?: Indicates whether this data input is required for all projects.
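
As a quick illustration of the value formats in this table, the following Python sketch assembles one set of environmental readings. The field names and the grouping into a single dictionary are assumptions for illustration; only the value formats (MM/DD/YYYY, TIME-ZONE_HH:MM:SS, Lat,Long, km/h) come from the table.

```python
# Sketch: environmental readings shaped to the formats described above.
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

environment = {
    "date": now.strftime("%m/%d/%Y"),       # Structured; MM/DD/YYYY
    "time": now.strftime("UTC_%H:%M:%S"),   # Structured; TIME-ZONE_HH:MM:SS
    "location": "41.3874,2.1686",           # Structured; Lat,Long (example coordinates)
    "speed": 12.5,                          # Structured; km/h, within the 0-400 range
}
print(environment)
```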

The following table lists recommended devices you can use for collecting different types of data:

| Input | Type of data | Hardware | Type | Suggested brand |
| --- | --- | --- | --- | --- |
| Heart rate (HR) | Structured; BPM | Sensors | Arm sensors, chest sensors, finger sensor |  |
|  |  | Smartbands | Wristband |  |
| HR & skin (GSR) | Structured; BPM and Hz | Sensors | Wristband, armband | - |
| Speech | Structured; text | Microphone | Any microphone device | - |
| EEG | Non-structured; Hz | - | Headset, headphones |  |

Plugging in an external or synthetic data set

We strongly recommend providing only data collected from real humans (by making them participate in your solution experiences) as input for your human-centered solution.

This is because analyzing real humans provides Affective Computing with comprehensive context about the solution experience (its limitations, boundaries, and genuine human reactions to events) and helps you deliver realistic personalization to your users.

For example, consider a solution that simulates and validates a thrill ride in a theme park. Unless the required input data (speech and heart rate) is collected from real users participating in this specific experience, the inputs aren't relevant and can't give Affective Computing a comprehensive understanding of the experience's context. (If your solution aims at pre-validation, the experience can be run in virtual reality with real human users for data collection.)

Nonetheless, if required (for example, for fast, low-level validation where accurate personalization is not a key factor), you can plug in an external data set, or generate and use a synthetic one, when setting up your solution.

You must do this manually: normalize the data, transform it into the format your solution requires, and feed it into your solution through the APIs (using a script or worker) as if it came from different users. The sketch below shows what such a script could look like.
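
The following Python sketch normalizes a heart-rate column from a CSV file and posts each row as a different user. The endpoint URL, authentication header, and field names are hypothetical placeholders; use the ones your solution's API actually defines.

```python
# Sketch of a worker script that feeds an external data set into a solution
# through its API as data from different users. All endpoint and field names
# below are hypothetical.
import csv
import requests

API_URL = "https://example.com/api/v1/data-inputs"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <YOUR_TOKEN>"}   # hypothetical auth scheme

def normalize_heart_rate(raw_value: str) -> int:
    # Clamp to the 50-200 BPM range the heart-rate input expects.
    return max(50, min(200, round(float(raw_value))))

with open("external_dataset.csv", newline="") as f:
    for row in csv.DictReader(f):  # e.g., columns: user_id, heart_rate
        payload = {
            "user_id": row["user_id"],  # submit as different users
            "heart_rate": normalize_heart_rate(row["heart_rate"]),
        }
        response = requests.post(API_URL, json=payload, headers=HEADERS, timeout=10)
        response.raise_for_status()
```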

How to define data inputs for your project

You must declare (define) a data input in your project for each type of raw or synthetic external data that you want to use in your solution. You can define and manage data inputs for your projects using the Portal or our public APIs.

Working with data inputs using the Portal

  • Defining data inputs

Working with data inputs using APIs

  • Create a project

  • Viewing available data inputs (interaction setups)
