MEx Data Set
Download: Data Folder, Data Set Description

Abstract: The MEx Multi-modal Exercise dataset contains data of 7 different physiotherapy exercises, performed by 30 subjects and recorded with 2 accelerometers, a pressure mat and a depth camera.

Data Set Characteristics: Time-Series
Number of Instances: 6262
Area: Computer
Attribute Characteristics: Real
Number of Attributes: 710
Date Donated: 2019-09-20
Associated Tasks: Classification, Clustering
Missing Values? N/A
Number of Web Hits: 2063


Source:

Anjana Wijekoon, Nirmalie Wiratunga, Kay Cooper
Robert Gordon University
Aberdeen, UK


Data Set Information:

The MEx Multi-modal Exercise dataset contains data of 7 different physiotherapy
exercises, performed by 30 subjects and recorded with four sensor modalities.
** Application **
The dataset can be used for exercise recognition, exercise quality assessment and
exercise counting by developing algorithms for pre-processing, feature extraction,
multi-modal sensor fusion, segmentation and classification.
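
As an illustration of the segmentation and feature extraction steps mentioned above,
a minimal sketch in Python follows. The 5-second window, 50% overlap, and the
mean/standard-deviation features are illustrative assumptions, not part of the
dataset description.

import numpy as np

def sliding_windows(samples, window_len, step):
    """Yield fixed-length windows (rows = time steps) from a 2-D array."""
    for start in range(0, len(samples) - window_len + 1, step):
        yield samples[start:start + window_len]

def window_features(window):
    """Per-axis mean and standard deviation, concatenated into one vector."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Example: a 60 s, 100 Hz accelerometer stream (placeholder values),
# segmented into 5 s windows with 50% overlap.
acc = np.random.randn(6000, 3)
X = np.array([window_features(w) for w in sliding_windows(acc, 500, 250)])
print(X.shape)  # (23, 6): one 6-dimensional feature vector per window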

** Data collection method **
At the beginning of the session, each subject was given a sheet with instructions
for the 7 exercises. At the beginning of each exercise, the researcher demonstrated
the exercise to the subject; the subject then performed the exercise for a maximum
of 60 seconds while being recorded with the four sensors. During the recording, the
researcher did not give any advice, keep count, or keep time to enforce a rhythm.
** Sensors **
Orbbec Astra Depth Camera
- sampling frequency - 15Hz
- frame size - 240x320
Sensing Tex Pressure Mat
- sampling frequency - 15Hz
- frame size - 32x16
Axivity AX3 3-Axis Logging Accelerometer
- sampling frequency - 100Hz
- range - ±8g
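
Because the accelerometers sample at 100Hz while the pressure mat and depth camera
sample at 15Hz, multi-modal fusion typically requires aligning the streams by
timestamp. The sketch below assumes each modality has already been loaded into a
pandas DataFrame with a datetime index; the 35 ms tolerance (roughly half a 15Hz
frame period) is an illustrative assumption.

import pandas as pd

def align_to_frames(acc_df, frame_df, tolerance="35ms"):
    # For every pressure-mat / depth-camera frame, take the nearest
    # accelerometer sample within the given tolerance.
    return pd.merge_asof(
        frame_df.sort_index(),
        acc_df.sort_index(),
        left_index=True,
        right_index=True,
        direction="nearest",
        tolerance=pd.Timedelta(tolerance),
    )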

** Sensor Placement **
All the exercises were performed lying down on the mat, with the subject wearing
two accelerometers, one on the wrist and one on the thigh. The depth camera was
placed above the subject facing downwards, recording an aerial view. The top of the
depth camera frame was aligned with the top of the pressure mat frame and the
subject's shoulders, so that the face is not included in the depth camera video.
** Data folder **
The MEx folder contains four folders, one for each sensor. Inside each sensor
folder are 30 folders, one for each subject. Each subject folder contains 8 files:
one file per exercise, with 2 files for exercise 4 because it is performed on two
sides. (Subject 22 has only 7 files, as they performed exercise 4 on one side
only.) Each line in a data file corresponds to one timestamped sensor reading.
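
A minimal sketch for traversing this layout in Python is shown below; the sensor
folder names ('act', 'acw', 'pm', 'dc') and the assumption that the files are
header-less CSV are inferred from the file naming in this description and may need
adjusting to the actual download.

import os
import pandas as pd

def load_sensor(root, sensor="act"):
    """Return {subject_id: {file_name: DataFrame}} for one sensor folder."""
    data = {}
    sensor_dir = os.path.join(root, sensor)
    for subject in sorted(os.listdir(sensor_dir)):
        subject_dir = os.path.join(sensor_dir, subject)
        if not os.path.isdir(subject_dir):
            continue
        data[subject] = {
            f: pd.read_csv(os.path.join(subject_dir, f), header=None)
            for f in sorted(os.listdir(subject_dir))
        }
    return data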


Attribute Information:

The 4 columns in the act and acw files are organized as follows:
1 - timestamp
2 - x value
3 - y value
4 - z value
Min value = -8
Max value = +8
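
For illustration, a small sketch of loading one act/acw file with these columns;
the file path is hypothetical and the comma-separated, header-less layout is an
assumption based on the column description above.

import pandas as pd

# Hypothetical path; adjust to the actual files in the download.
acc = pd.read_csv("MEx/act/01/01.csv", header=None,
                  names=["timestamp", "x", "y", "z"])

# Values lie in the -8g..+8g range; scaling to [-1, 1] is an assumed,
# commonly used pre-processing step, not part of the dataset description.
acc[["x", "y", "z"]] = acc[["x", "y", "z"]] / 8.0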
The 513 columns in the pm file are organized as follows:
1 - timestamp
2-513 - pressure mat data frame (32x16)
Min value = 0
Max value = 1
The 193 columns in the dc file are organized as follows:
1 - timestamp
2-193 - depth camera data frame (12x16)
The dc data frame is scaled down from 240x320 to 12x16 using the OpenCV resize
algorithm.
Min value = 0
Max value = 1
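
The sketch below turns the flattened pm and dc rows back into 2-D frames and shows
how a raw 240x320 depth frame could be reduced to 12x16 with OpenCV. Row-major
ordering of the flattened frames and the interpolation mode passed to cv2.resize
are assumptions; the description does not specify either.

import numpy as np
import cv2

def row_to_frame(row, shape):
    # Drop the timestamp (column 1) and reshape the remaining values,
    # assuming row-major ordering of the flattened frame.
    return np.asarray(row[1:], dtype=np.float32).reshape(shape)

# pm rows: 512 values -> 32x16 frame; dc rows: 192 values -> 12x16 frame.
pm_frame = row_to_frame(np.zeros(513), (32, 16))
dc_frame = row_to_frame(np.zeros(193), (12, 16))

# Reducing a raw 240x320 depth frame to 12x16; cv2.resize takes the target
# size as (width, height), and INTER_AREA is an assumed interpolation choice.
raw_depth = np.zeros((240, 320), dtype=np.float32)
small = cv2.resize(raw_depth, (16, 12), interpolation=cv2.INTER_AREA)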


Relevant Papers:

Wijekoon, Anjana, Nirmalie Wiratunga, and Kay Cooper. 'MEx: Multi-modal Exercises Dataset for Human Activity Recognition.' arXiv preprint [Web Link] (2019).



Citation Request:

@article{wijekoon2019mex,
title={MEx: Multi-modal Exercises Dataset for Human Activity Recognition},
author={Wijekoon, Anjana and Wiratunga, Nirmalie and Cooper, Kay},
journal={arXiv preprint [Web Link]},
year={2019}
}

