Grammatical Facial Expressions
Donated on 10/5/2014
This dataset supports the development of models that make it possible to interpret Grammatical Facial Expressions from Brazilian Sign Language (Libras).
Dataset Characteristics
Multivariate, Sequential
Subject Area
Computer Science
Associated Tasks
Classification, Clustering
Feature Type
Real
# Instances
27965
# Features
-
Dataset Information
Additional Information
The automated analysis of facial expressions has been widely used in different research areas, such as biometrics and emotion analysis. Facial expressions are especially important in sign language, since they help to form the grammatical structure of the language and allow for disambiguation; they are therefore called Grammatical Facial Expressions. This dataset was used in the experiments described in Freitas et al. (2014).

The dataset is composed of eighteen videos recorded with a Microsoft Kinect sensor. In each video, a user performs, five times in front of the sensor, five sentences in Libras (Brazilian Sign Language) that require the use of a grammatical facial expression. From the Kinect we obtained: (a) an image of each frame, identified by a timestamp; (b) a text file containing one hundred coordinates (x, y, z) of points from the eyes, nose, eyebrows, face contour and irises, where each line in the file corresponds to the points extracted from one frame. The images enabled a specialist to manually label each file, providing a ground truth for classification.

The dataset is organized in 36 files: 18 datapoint files and 18 target files, one pair for each video. The name of each file identifies its video: the letter corresponding to the user (A or B), the name of the grammatical facial expression, and a specification (target or datapoints).
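For readers working directly with the raw text files, the following is a minimal loading sketch for one datapoints/target file pair. It assumes whitespace-separated numeric rows with no header, a leading timestamp column followed by 300 coordinate values (100 points x (x, y, z)) per datapoints line, and a per-frame 0/1 label in the target file; the helper load_pair and the target filename a_relative_targets.txt are illustrative guesses based on the naming convention described above, not part of the dataset distribution.

# Minimal loading sketch for one datapoints/target file pair (assumptions
# noted above); adjust paths and parsing to the actual files.
import numpy as np

def load_pair(datapoints_path, target_path):
    """Return (timestamps, coordinates, labels) for one video."""
    raw = np.loadtxt(datapoints_path)                 # (n_frames, 1 + 300), assumed layout
    timestamps = raw[:, 0]                            # frame timestamps
    coords = raw[:, 1:].reshape(len(raw), 100, 3)     # (n_frames, 100 points, xyz)
    targets = np.loadtxt(target_path)
    labels = targets[:, -1] if targets.ndim > 1 else targets  # 1 = expression present
    return timestamps, coords, labels

ts, xyz, labels = load_pair(
    "grammatical_facial_expression/a_relative_datapoints.txt",
    "grammatical_facial_expression/a_relative_targets.txt",   # assumed filename
)
print(xyz.shape, labels.shape)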
Has Missing Values?
No
Variables Table
The variables are the 100 facial points (each with x, y, z coordinates) described under Additional Variable Information below; none have missing values.
Additional Variable Information
Datapoints files: coordinates x and y are given in pixels; coordinates z are given in millimetres. Each line begins with the label of the frame (its timestamp), followed by:
0 - 7 (x, y, z) - left eye
8 - 15 (x, y, z) - right eye
16 - 25 (x, y, z) - left eyebrow
26 - 35 (x, y, z) - right eyebrow
36 - 47 (x, y, z) - nose
48 - 67 (x, y, z) - mouth
68 - 86 (x, y, z) - face contour
87 (x, y, z) - left iris
88 (x, y, z) - right iris
89 (x, y, z) - nose tip
90 - 94 (x, y, z) - line above left eyebrow
95 - 99 (x, y, z) - line above right eyebrow
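The point-index layout above can be turned into a small region-slicing helper. This is an illustrative sketch, not part of the dataset distribution: it assumes a (n_frames, 100, 3) coordinate array as built in the loading sketch earlier, and split_regions is a hypothetical helper name.

# Slice a (n_frames, 100, 3) coordinate array into named facial regions,
# using the point-index ranges listed above.
import numpy as np

REGIONS = {
    "left_eye": slice(0, 8),
    "right_eye": slice(8, 16),
    "left_eyebrow": slice(16, 26),
    "right_eyebrow": slice(26, 36),
    "nose": slice(36, 48),
    "mouth": slice(48, 68),
    "face_contour": slice(68, 87),
    "left_iris": slice(87, 88),
    "right_iris": slice(88, 89),
    "nose_tip": slice(89, 90),
    "line_above_left_eyebrow": slice(90, 95),
    "line_above_right_eyebrow": slice(95, 100),
}

def split_regions(coords):
    """Map region name -> (n_frames, n_region_points, 3) sub-array."""
    return {name: coords[:, idx, :] for name, idx in REGIONS.items()}

coords = np.zeros((10, 100, 3))   # placeholder: 10 frames of 100 (x, y, z) points
print({name: part.shape for name, part in split_regions(coords).items()})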
Dataset Files
File | Size |
---|---|
grammatical_facial_expression/a_relative_datapoints.txt | 4.6 MB |
grammatical_facial_expression/b_conditional_datapoints.txt | 4.1 MB |
grammatical_facial_expression/a_conditional_datapoints.txt | 3.8 MB |
grammatical_facial_expression/b_relative_datapoints.txt | 3.8 MB |
grammatical_facial_expression/b_topics_datapoints.txt | 3.6 MB |
(5 of 77 files shown)
pip install ucimlrepo
from ucimlrepo import fetch_ucirepo

# fetch dataset
grammatical_facial_expressions = fetch_ucirepo(id=317)

# data (as pandas dataframes)
X = grammatical_facial_expressions.data.features
y = grammatical_facial_expressions.data.targets

# metadata
print(grammatical_facial_expressions.metadata)

# variable information
print(grammatical_facial_expressions.variables)
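For the Classification task listed above, a quick per-frame baseline on the fetched dataframes might look like the sketch below. It assumes scikit-learn is installed and that the targets frame reduces to a single label column (inspect grammatical_facial_expressions.variables to confirm). Note that a random frame-level split ignores the video structure; splitting by video or signer would give a more realistic estimate.

# Per-frame baseline sketch; scikit-learn and a single-column target are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

frame_labels = y.iloc[:, 0]   # first (assumed only) target column
X_train, X_test, y_train, y_test = train_test_split(
    X, frame_labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))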
Freitas, F., Barbosa, F., & Peres, S. (2014). Grammatical Facial Expressions [Dataset]. UCI Machine Learning Repository. https://doi.org/10.24432/C59S3R.
Creators
Fernando Freitas
Felipe Barbosa
Sarajane Peres
DOI
10.24432/C59S3R
License
This dataset is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
This license allows sharing and adaptation of the dataset for any purpose, provided that appropriate credit is given.