1. Badges: Badges labeled "+" or "-" as a function of a person's name.
2. Reuters-21578 Text Categorization Collection: This is a collection of documents that appeared on Reuters newswire in 1987. The documents were assembled and indexed with categories.
3. Syskill and Webert Web Page Ratings: This database contains the HTML source of web pages plus the ratings of a single user on these web pages. The web pages cover four separate subjects (Bands - recording artists; Goats; Sheep; and BioMedical).
4. Reuters Transcribed Subset: This dataset was created by reading out 200 files from the 10 largest Reuters classes and using an Automatic Speech Recognition system to create transcriptions of the documents.
5. Amazon Commerce reviews set: The dataset is used for authorship identification in online Writeprint, which is a new research field of pattern recognition.
6. Reuter_50_50: The dataset is used for authorship identification in online Writeprint, which is a new research field of pattern recognition.
7. Farm Ads: This data was collected from text ads found on twelve websites that deal with various farm animal related topics. The binary labels are based on whether or not the content owner approves of the ad.
8. DBWorld e-mails: It contains 64 e-mails manually collected from the DBWorld mailing list. They are classified into 'announcements of conferences' and 'everything else'.
9. KEGG Metabolic Relation Network (Directed): KEGG metabolic pathways modeled as a directed relation network. A variety of graph features is provided.
10. KEGG Metabolic Reaction Network (Undirected): KEGG metabolic pathways modeled as an undirected reaction network. A variety of graph features is provided.
11. YouTube Comedy Slam Preference Data: This dataset provides user vote data, collected on YouTube Comedy Slam, on which video from a pair of videos is funnier. The task is to automatically predict this preference based on video metadata.
12. SMS Spam Collection: The SMS Spam Collection is a public set of labeled SMS messages collected for mobile phone spam research.
13. CNAE-9: This is a data set containing 1080 documents of free-text business descriptions of Brazilian companies categorized into a subset of 9 categories.
14. Northix: Northix is designed to be a schema matching benchmark problem for data integration of two entity relationship databases.
15. Legal Case Reports: A textual corpus of 4000 legal cases for automatic summarization and citation analysis. For each document we collect catchphrases, citation sentences, citation catchphrases and citation classes.
16. YouTube Multiview Video Games Dataset: This dataset contains about 120k instances, each described by 13 feature types, with class information, especially useful for exploring multiview topics (co-training, ensembles, clustering, ...).
17. Dresses_Attribute_Sales: This dataset contains attributes of dresses and recommendations according to their sales. Sales are monitored on alternate days.
18. Twitter Data set for Arabic Sentiment Analysis: The problem of Sentiment Analysis (SA) has been well studied for English but not for Arabic. Two main approaches have been devised: corpus-based and lexicon-based.
19. Sentence Classification: Contains sentences from the abstract and introduction of 30 articles annotated with a modified Argumentative Zones annotation scheme. These articles come from biology, machine learning and psychology.
20. microblogPCU: The microblogPCU data is crawled from the Sina Weibo microblog (http://weibo.com/). This data can be used to study machine learning methods as well as to conduct social network research.
21. Sentiment Labelled Sentences: The dataset contains sentences labelled with positive or negative sentiment.
22. Miskolc IIS Hybrid IPS: The dataset was created for the comparison and evaluation of hybrid indoor positioning methods. It contains data from W-LAN and Bluetooth interfaces and a magnetometer.
23. DeliciousMIL: A Data Set for Multi-Label Multi-Instance Learning with Instance Labels: This dataset includes 1) 12234 documents (8251 training, 3983 test) extracted from DeliciousT140 dataset, 2) class labels for all documents, 3) labels for a subset of sentences of the test documents.
24. TTC-3600: Benchmark dataset for Turkish text categorization: The TTC-3600 data set is a collection of Turkish news and articles comprising 3,600 categorized documents from 6 well-known portals in Turkey. It is available in 4 different forms in Weka ARFF format.
25. YouTube Spam Collection: A public set of comments collected for spam research. It comprises five datasets composed of 1,956 real comments extracted from five videos that were among the 10 most viewed during the collection period.
26. KDC-4007 dataset Collection: The KDC-4007 dataset Collection is a Kurdish document classification text collection covering categories of Kurdish Sorani news and articles.
27. Burst Header Packet (BHP) flooding attack on Optical Burst Switching (OBS) Network: One of the primary challenges in identifying the risks of the Burst Header Packet (BHP) flood attacks in Optical Burst Switching networks (OBS) is the scarcity of reliable historical data.
28. University of Tehran Question Dataset 2016 (UTQD.2016): Persian questions gathered from a jeopardy game broadcast on Iranian national television.
29. Paper Reviews: This sentiment analysis data set contains scientific paper reviews from an international conference on computing and informatics. The task is to predict the orientation or the evaluation of a review.
30. Sports articles for objectivity analysis: 1000 sports articles were labeled using Amazon Mechanical Turk as objective or subjective. The raw texts, extracted features, and the URLs from which the articles were retrieved are provided.
31. Victorian Era Authorship Attribution: To create the largest authorship attribution dataset, we extracted works of 50 well-known authors. To allow for non-exhaustive learning, the training set contains 45 authors whereas the test set contains 50.
32. Multimodal Damage Identification for Humanitarian Computing: 5879 captioned images (image and text) from social media, related to damage during natural disasters/wars and belonging to 6 classes: Fires, Floods, Natural landscape, Infrastructural, Human, Non-damage.
33. BuddyMove Data Set: User interest information extracted from user reviews published on holidayiq.com about various types of points of interest in South India.
34. Roman Urdu Data Set: Roman Urdu (the scripting style for the Urdu language) is one of the limited-resource languages. A data corpus comprising more than 20,000 records was collected.
35. Drug Review Dataset (Druglib.com): The dataset provides patient reviews on specific drugs along with related conditions. Reviews and ratings are grouped into reports on three aspects: benefits, side effects, and overall comment.
36. Drug Review Dataset (Drugs.com): The dataset provides patient reviews on specific drugs along with related conditions and a 10-star patient rating reflecting overall patient satisfaction.
37. Travel Reviews: Reviews on destinations in 10 categories mentioned across East Asia. Each traveler rating is mapped as Excellent(4), Very Good(3), Average(2), Poor(1), and Terrible(0) and average rating is used.
38. Travel Review Ratings: Google reviews on attractions from 24 categories across Europe are considered. Google user ratings range from 1 to 5, and the average user rating per category is calculated.
39. Turkish Spam V01: The TurkishSpam data set contains spam and normal emails written in Turkish.
40. Detect Malware Types: No description was provided for this data set.
41. Opinion Corpus for Lebanese Arabic Reviews (OCLAR): The OCLAR corpus can be used for Arabic sentiment classification of service reviews, including hotels, restaurants, shops, and others.
42. Youtube cookery channels viewers comments in Hinglish: The datasets are taken from the top 2 Indian cooking channels, the Nisha Madhulika channel and Kabita's Kitchen channel. The data set is in the Hinglish language.
43. Online Retail II: A real online retail transaction data set spanning two years.
44. A study of Asian Religious and Biblical Texts: Mainly from Project Gutenberg, we combine the Upanishads, Yoga Sutras, Buddha Sutras, Tao Te Ching, Book of Wisdom, Book of Proverbs, Book of Ecclesiastes, and Book of Ecclesiasticus.
45. CLINC150: This is an intent classification (text classification) dataset with 150 in-domain intent classes. The main purpose of this dataset is to evaluate various classifiers on out-of-domain performance.
46. Russian Corpus of Biographical Texts: Sentence classification (Russian). The corpus contains Wikipedia texts split into sentences. Each sentence has a topic label.
47. Guitar Chords finger positions: Position of the fingers for 2633 guitar chords in standard tuning (double checked with software)
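Many of the collections above (for example, the SMS Spam Collection) are intended for supervised text classification. As a minimal sketch of that task, the following trains a bag-of-words Naive Bayes classifier with add-one smoothing; the toy messages, labels, and function names are illustrative stand-ins, not taken from any of the datasets listed.

```python
from collections import Counter, defaultdict
import math

# Toy stand-ins for labeled messages; the real collections hold
# thousands of labeled texts (e.g. "ham"/"spam" pairs).
train = [
    ("spam", "win a free prize now"),
    ("spam", "free entry win cash"),
    ("ham",  "are we still meeting for lunch"),
    ("ham",  "see you at home tonight"),
]

def train_nb(data):
    """Count per-label word frequencies and label frequencies."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for label, text in data:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def predict(text, word_counts, label_counts, vocab):
    """Return the label maximizing log prior + smoothed log likelihood."""
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

wc, lc, vocab = train_nb(train)
print(predict("free prize", wc, lc, vocab))  # prints "spam"
```

The same skeleton applies to most of the categorization sets above once their documents and labels are loaded in place of the toy list.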
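The Travel Reviews entry above maps rating labels to numeric scores (Excellent = 4 down to Terrible = 0) and averages them. A minimal sketch of that mapping, using a hypothetical list of rating labels for one destination category:

```python
# Mapping from rating labels to numeric scores, as described
# for the Travel Reviews dataset.
RATING_MAP = {"Excellent": 4, "Very Good": 3, "Average": 2, "Poor": 1, "Terrible": 0}

def average_rating(labels):
    """Average the numeric scores for a list of traveler rating labels."""
    scores = [RATING_MAP[label] for label in labels]
    return sum(scores) / len(scores)

# Hypothetical ratings for one destination category:
print(average_rating(["Excellent", "Very Good", "Poor"]))  # 8/3, about 2.67
```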