Grammatical Facial Expressions Data Set
Download: Data Folder, Data Set Description

Abstract: This dataset supports the development of models that make it possible to interpret Grammatical Facial Expressions in Brazilian Sign Language (Libras).

Data Set Characteristics: Multivariate, Sequential
Number of Instances: 27965
Area: Computer
Attribute Characteristics: Real
Number of Attributes: 100
Date Donated: 2014-10-06
Associated Tasks: Classification, Clustering
Missing Values? N/A



Source:

(a) Creators:
Fernando de Almeida Freitas (Freitas, F. A.)
{fernando} at incluirtecnologia.com.br
Felipe Venâncio Barbosa (Barbosa, F. V.)
Sarajane Marques Peres (Peres, S. M.)
{felipebarbosa, sarajane} at usp.br
http://each.uspnet.usp.br/sarajane/

(b) Donor:
University of São Paulo
School of Arts, Sciences and Humanities
São Paulo, SP, Brazil
http://www5.usp.br/en/

Incluir Tecnologia LTDA ME
Itajubá, MG, Brazil
www.incluirtecnologia.com.br


Data Set Information:

The automated analysis of facial expressions has been widely used in different research areas, such as biometrics and emotion analysis. Facial expressions are especially important in sign language, since they help to form the grammatical structure of the language and to disambiguate meaning; for this reason they are called Grammatical Facial Expressions. This dataset was used in the experiments described in Freitas et al. (2014).

The dataset is composed of eighteen videos recorded with a Microsoft Kinect sensor. In each video, a user performs, in front of the sensor, five sentences in Libras (Brazilian Sign Language) that require the use of a grammatical facial expression; each sentence is performed five times. From the Kinect we obtained: (a) an image of each frame, identified by a timestamp; (b) a text file containing one hundred coordinates (x, y, z) of points from the eyes, nose, eyebrows, face contour, and irises, where each line in the file corresponds to the points extracted from one frame. The images enabled manual labeling of each file by a specialist, providing a ground truth for classification.

The dataset is organized in 36 files: 18 datapoint files and 18 target files, one pair for each video in the dataset. The name of each file identifies the corresponding video: the letter of the user (A or B), the name of the grammatical facial expression, and a specification of the file type (target or datapoints).
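For illustration, here is a minimal Python loading sketch for one such file pair. It is an assumption on our part rather than code shipped with the dataset: it assumes whitespace-separated values, a leading frame label (timestamp) column before the 300 coordinate values in a datapoints file, and one label per line in the matching targets file; the file names are hypothetical examples built from the naming scheme above.

    import numpy as np

    def load_pair(datapoints_path, targets_path):
        """Load one video's datapoints/targets file pair as NumPy arrays."""
        # Assumed layout: one frame per line, first column a frame label
        # (timestamp), then 100 points * 3 coordinates = 300 values; pass
        # delimiter=',' to np.loadtxt if the files are comma-separated.
        raw = np.loadtxt(datapoints_path)
        timestamps = raw[:, 0]                    # frame identifiers
        points = raw[:, 1:].reshape(-1, 100, 3)   # (n_frames, 100 points, x/y/z)
        labels = np.loadtxt(targets_path)         # one class label per frame
        return timestamps, points, labels

    # Hypothetical file names: user letter + expression name + specification.
    timestamps, points, labels = load_pair(
        "a_affirmative_datapoints.txt", "a_affirmative_targets.txt")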


Attribute Information:

Datapoints files:

The x and y coordinates are given in pixels.
The z coordinates are given in millimetres.

Each line begins with a frame label, followed by the points at these indices:
0 - 7 (x,y,z) - left eye
8 - 15 (x,y,z) - right eye
16 - 25 (x,y,z) - left eyebrow
26 - 35 (x,y,z) - right eyebrow
36 - 47 (x,y,z) - nose
48 - 67 (x,y,z) - mouth
68 - 86 (x,y,z) - face contour
87 (x,y,z) - left iris
88 (x,y,z) - right iris
89 (x,y,z) - nose tip
90 - 94 (x,y,z) - line above left eyebrow
95 - 99 (x,y,z) - line above right eyebrow
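The index ranges above map directly to array slices. The following sketch (same assumptions as the loading sketch above; the region names are ours) splits one frame's (100, 3) point array into named facial regions; the slice upper bounds are exclusive because the ranges in the list are inclusive.

    # Facial regions as slices into a (100, 3) array of (x, y, z) points for
    # one frame; bounds follow the inclusive index ranges listed above.
    REGIONS = {
        "left_eye":            slice(0, 8),
        "right_eye":           slice(8, 16),
        "left_eyebrow":        slice(16, 26),
        "right_eyebrow":       slice(26, 36),
        "nose":                slice(36, 48),
        "mouth":               slice(48, 68),
        "face_contour":        slice(68, 87),
        "left_iris":           slice(87, 88),
        "right_iris":          slice(88, 89),
        "nose_tip":            slice(89, 90),
        "above_left_eyebrow":  slice(90, 95),
        "above_right_eyebrow": slice(95, 100),
    }

    def split_regions(frame_points):
        """Map one frame's (100, 3) point array to named facial regions."""
        return {name: frame_points[s] for name, s in REGIONS.items()}

    # Example: regions = split_regions(points[0])
    # regions["left_eyebrow"] then has shape (10, 3).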


Relevant Papers:

Freitas, F. A.; Peres, S. M.; Lima, C. A. M.; Barbosa, F. V. Grammatical Facial Expressions Recognition with Machine Learning. In: Proceedings of the 27th Florida Artificial Intelligence Research Society Conference (FLAIRS), Pensacola Beach, 2014. Palo Alto: The AAAI Press, 2014. p. 180-185.



Citation Request:

Please refer to the Machine Learning Repository's citation policy.
Additionally, the authors request citation of the paper listed above under Relevant Papers.

