1. DBWorld e-mails: It contains 64 e-mails which I have manually collected from the DBWorld mailing list. They are classified into two classes: 'conference announcements' and 'everything else'.
2. Northix: Northix is designed to be a schema matching benchmark problem for data integration of two entity-relationship databases.
3. Mturk User-Perceived Clusters over Images: This dataset was collected by Shan-Hung Wu and DataLab members at NTHU, Taiwan. There are 325 user-perceived clusters from 100 users and their corresponding descriptions.
4. CNAE-9: This is a data set containing 1080 documents of free-text business descriptions of Brazilian companies categorized into a subset of 9 categories.
5. Amazon Commerce reviews set: The dataset is used for authorship identification in online Writeprints, a new research field of pattern recognition.
6. Reuter_50_50: The dataset is used for authorship identification in online Writeprints, a new research field of pattern recognition.
7. TTC-3600: Benchmark dataset for Turkish text categorization: The TTC-3600 data set is a collection of Turkish news and articles comprising 3,600 categorized documents from 6 well-known news portals in Turkey. It is provided in 4 different forms in ARFF Weka format.
8. Farm Ads: This data was collected from text ads found on twelve websites that deal with various farm animal related topics. The binary labels are based on whether or not the content owner approves of the ad.
9. NIPS Conference Papers 1987-2015: This data set contains the distribution of words in the full text of the NIPS conference papers published from 1987 to 2015.
10. DeliciousMIL: A Data Set for Multi-Label Multi-Instance Learning with Instance Labels: This dataset includes 1) 12234 documents (8251 training, 3983 test) extracted from DeliciousT140 dataset, 2) class labels for all documents, 3) labels for a subset of sentences of the test documents.
11. Health News in Twitter: The data was collected in 2015 using the Twitter API. This dataset contains health news from more than 15 major health news agencies such as BBC, CNN, and NYT.
12. Victorian Era Authorship Attribution: To create the largest authorship attribution dataset, we extracted works of 50 well-known authors. To support non-exhaustive learning, the training set contains 45 authors, whereas the test set contains all 50.
13. YouTube Multiview Video Games Dataset: This dataset contains about 120k instances, each described by 13 feature types, with class information, especially useful for exploring multiview topics (co-training, ensembles, clustering, ...).
14. Bag of Words: This data set contains five text collections in the form of bags-of-words.
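Several of the collections above (e.g. the Bag of Words and NIPS papers sets) distribute word counts as sparse "docword" files rather than raw text. Assuming the UCI docword layout (three header lines giving the number of documents D, vocabulary size W, and number of non-zero entries NNZ, followed by one 'docID wordID count' triple per line, with 1-based IDs), a minimal parser might look like this; the function name and return shape are illustrative, not part of any official loader:

```python
import io
from collections import defaultdict

def parse_docword(stream):
    """Parse a UCI-style docword stream into {doc_id: {word_id: count}}.

    Assumed layout: three header lines (D, W, NNZ), then NNZ lines of
    'docID wordID count' triples with IDs starting at 1.
    """
    n_docs = int(stream.readline())
    n_words = int(stream.readline())
    nnz = int(stream.readline())
    docs = defaultdict(dict)
    for _ in range(nnz):
        doc_id, word_id, count = map(int, stream.readline().split())
        docs[doc_id][word_id] = count
    return n_docs, n_words, dict(docs)

# Tiny synthetic example in the same layout (not real corpus data).
sample = io.StringIO("3\n5\n4\n1 2 3\n1 4 1\n2 1 2\n3 5 7\n")
n_docs, n_words, docs = parse_docword(sample)
print(n_docs, n_words, docs[1])  # → 3 5 {2: 3, 4: 1}
```

The nested-dict representation keeps the sketch dependency-free; for real corpora of this size a sparse matrix (e.g. scipy's CSR format) would be the more practical target.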