1. User Identification From Walking Activity: The dataset contains accelerometer data collected from an Android smartphone positioned in the chest pocket while 22 participants walked in the wild over a predefined path.
2. Activity Recognition from Single Chest-Mounted Accelerometer: The dataset contains data collected from a wearable accelerometer mounted on the chest and is intended for activity recognition research (a windowing sketch for this kind of signal follows the list).
3. Synthetic Control Chart Time Series: This dataset consists of synthetically generated control charts covering six pattern classes: normal, cyclic, increasing trend, decreasing trend, upward shift, and downward shift (a generation sketch follows the list).
4. Japanese Vowels: This dataset records 640 time series of 12 LPC cepstrum coefficients taken from nine male speakers.
5. Australian Sign Language signs (High Quality): This data consists of samples of Auslan (Australian Sign Language) signs; 27 examples of each of 95 signs were captured from a native signer using high-quality position trackers.
6. Spoken Arabic Digit: This dataset contains time series of mel-frequency cepstral coefficients (MFCCs) corresponding to spoken Arabic digits, recorded from 44 male and 44 female native Arabic speakers (an MFCC extraction sketch follows the list).
7. Gesture Phase Segmentation: The dataset is composed of features extracted from 7 videos of people gesticulating, aimed at studying gesture phase segmentation. It contains 50 attributes divided into two files for each video.
8. Crop mapping using fused optical-radar data set: Combining optical and PolSAR remote sensing images offers a complementary data set with a significant number of temporal, spectral, textural, and polarimetric features for cropland classification.
9. Human Activity Recognition from Continuous Ambient Sensor Data: This dataset represents ambient data collected in homes with volunteer residents. Data are collected continuously while residents perform their normal routines.
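
A common first step with the chest-mounted accelerometer data in items 1 and 2 is to slice the raw x/y/z signal into fixed-length, overlapping windows before classification. The sketch below assumes a per-participant CSV with (index, x, y, z, label) columns; the file name, column names, and window sizes are illustrative choices, not the datasets' documented layout.

```python
import numpy as np
import pandas as pd

# Hypothetical file name and column layout -- check the dataset's README
# for the actual per-participant file format.
df = pd.read_csv("participant_01.csv", names=["idx", "x", "y", "z", "label"])

def sliding_windows(signal: np.ndarray, width: int = 256, step: int = 128) -> np.ndarray:
    """Slice a (n_samples, n_axes) signal into overlapping windows."""
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

X = sliding_windows(df[["x", "y", "z"]].to_numpy())
print(X.shape)  # (n_windows, 256, 3), ready for feature extraction
```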
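
Item 3's control charts are easier to picture once you see how such series are typically generated. This is a minimal sketch of the six classic pattern classes; the baseline, noise, slope, and shift magnitudes here are illustrative, not the exact parameters used to build the published data.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_chart(kind: str, n: int = 60) -> np.ndarray:
    """Generate one synthetic control chart of length n."""
    t = np.arange(n)
    base = rng.normal(30.0, 2.0, n)  # noisy baseline around 30
    if kind == "normal":
        return base
    if kind == "cyclic":
        return base + 10.0 * np.sin(2.0 * np.pi * t / 15.0)
    if kind == "increasing":
        return base + 0.4 * t
    if kind == "decreasing":
        return base - 0.4 * t
    if kind == "upward_shift":
        return base + 10.0 * (t >= n // 2)   # level jump at the midpoint
    if kind == "downward_shift":
        return base - 10.0 * (t >= n // 2)
    raise ValueError(f"unknown pattern: {kind}")

series = control_chart("upward_shift")
```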
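
For item 6, the dataset ships precomputed MFCC sequences, but if you want to produce comparable features from your own recordings, librosa's standard MFCC routine is a reasonable starting point. The audio file path below is a placeholder.

```python
import librosa

# Placeholder path; any mono speech recording works.
y, sr = librosa.load("digit_utterance.wav", sr=None)

# 13 coefficients matches the dataset's per-frame feature dimension.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, n_frames): one MFCC vector per analysis frame
```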