1. 3D Road Network (North Jutland, Denmark): 3D road network with highly accurate elevation information (±20 cm) from Denmark, used in eco-routing and fuel/CO2-estimation routing algorithms.
2. Amazon Commerce reviews set: The dataset is used for authorship identification in online Writeprint, a new research field in pattern recognition.
3. Burst Header Packet (BHP) flooding attack on Optical Burst Switching (OBS) Network: One of the primary challenges in identifying the risks of Burst Header Packet (BHP) flooding attacks in Optical Burst Switching (OBS) networks is the scarcity of reliable historical data.
4. CLINC150: This is an intent classification (text classification) dataset with 150 in-domain intent classes. The main purpose of this dataset is to evaluate various classifiers on out-of-domain performance.
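Out-of-domain evaluation of the kind CLINC150 is designed for is often done by thresholding a classifier's confidence: if the top in-domain probability is too low, the query is flagged as out-of-scope. A minimal sketch, with invented logits, an invented threshold, and only 3 of the 150 intent names for brevity (this is not any published baseline):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_oos(logits, labels, threshold=0.5):
    """Return the top in-domain label, or 'oos' if confidence is too low."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "oos"

labels = ["transfer", "balance", "weather"]  # illustrative intent names
print(predict_with_oos([4.0, 0.5, 0.2], labels))  # confident -> transfer
print(predict_with_oos([1.1, 1.0, 0.9], labels))  # flat distribution -> oos
```

A peaked distribution keeps the in-domain label; a near-uniform one falls below the threshold and is rejected as out-of-scope.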
5. DeliciousMIL: A Data Set for Multi-Label Multi-Instance Learning with Instance Labels: This dataset includes 1) 12234 documents (8251 training, 3983 test) extracted from DeliciousT140 dataset, 2) class labels for all documents, 3) labels for a subset of sentences of the test documents.
6. Detect Malware Types: (No description provided.)
7. DSRC Vehicle Communications: This set provides data on wireless communications between vehicles and roadside units. Two separate data sets are provided: a normal scenario and one recorded in the presence of an attacker (jammer).
8. Farm Ads: This data was collected from text ads found on twelve websites that deal with various farm animal related topics. The binary labels are based on whether or not the content owner approves of the ad.
9. Health News in Twitter: The data was collected in 2015 using the Twitter API. This dataset contains health news from more than 15 major health news agencies such as BBC, CNN, and NYT.
10. microblogPCU: MicroblogPCU data is crawled from the Sina Weibo microblog [http://weibo.com/]. This data can be used to study machine learning methods as well as for social network research.
11. Multimodal Damage Identification for Humanitarian Computing: 5879 captioned images (image and text) from social media related to damage during natural disasters/wars, belonging to 6 classes: Fires, Floods, Natural landscape, Infrastructural, Human, Non-damage.
12. NSF Research Award Abstracts 1990-2003: This data set consists of (a) 129,000 abstracts describing NSF awards for basic research, (b) bag-of-words data files extracted from the abstracts, and (c) a list of the words used for indexing the bag-of-words data.
13. NYSK: NYSK (New York v. Strauss-Kahn) is a collection of English news articles about the case relating to allegations of sexual assault against the former IMF director Dominique Strauss-Kahn (May 2011).
14. Reuters-21578 Text Categorization Collection: This is a collection of documents that appeared on Reuters newswire in 1987. The documents were assembled and indexed with categories.
15. Sentiment analysis in Saudi Arabia about distance education during Covid-19: The data were collected from Twitter and concern distance education in Saudi Arabia during the Covid-19 pandemic. Data were gathered from September 2 to 17, 2020. The Arabic tweets are classified as positive or negative.
16. Sentiment Labelled Sentences: The dataset contains sentences labelled with positive or negative sentiment.
17. SMS Spam Collection: The SMS Spam Collection is a public set of labeled SMS messages that have been collected for mobile phone spam research.
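Spam corpora like this one are typically used for binary text classification. As a minimal, self-contained sketch of the standard bag-of-words Naive Bayes approach (the toy training messages below are invented for illustration and are not drawn from the actual dataset):

```python
import math
from collections import Counter

# Toy labeled messages (invented; the real collection has thousands).
train = [
    ("WINNER claim your free prize now", "spam"),
    ("Free entry in a weekly prize draw", "spam"),
    ("Are we still meeting for lunch today", "ham"),
    ("Can you send me the report tonight", "ham"),
]

def tokenize(text):
    return text.lower().split()

# Per-class word frequencies and class priors.
word_counts = {"spam": Counter(), "ham": Counter()}
class_totals = Counter()
for text, label in train:
    class_totals[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set(word_counts["spam"]) | set(word_counts["ham"])

def classify(text):
    # Log-space Naive Bayes with add-one (Laplace) smoothing.
    scores = {}
    for label in ("spam", "ham"):
        total_words = sum(word_counts[label].values())
        score = math.log(class_totals[label] / len(train))
        for word in tokenize(text):
            score += math.log(
                (word_counts[label][word] + 1) / (total_words + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free prize"))  # -> spam
print(classify("meeting for lunch"))      # -> ham
```

With a real corpus the same structure applies; only the training list changes.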
18. Turkish Headlines Dataset: The dataset consists of 7 news-type labels: economy, politics, life, technology, magazine, health, and sport. It was created by its donor from the Mynet, Milliyet, and similar news websites.
19. Twenty Newsgroups: This data set consists of 20000 messages taken from 20 newsgroups.
20. Twitter Data set for Arabic Sentiment Analysis: The problem of Sentiment Analysis (SA) has been well studied for English but much less so for Arabic. Two main approaches have been devised: corpus-based and lexicon-based.
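Of the two approaches mentioned above, the lexicon-based one simply sums polarity weights of known words and labels a tweet by the sign of the total. A toy sketch with an invented four-word lexicon (real lexicons hold thousands of entries with graded weights):

```python
# Toy Arabic polarity lexicon (invented for illustration).
lexicon = {
    "جميل": +1.0,  # "beautiful"
    "رائع": +1.0,  # "wonderful"
    "سيء": -1.0,   # "bad"
    "ممل": -1.0,   # "boring"
}

def lexicon_score(tweet):
    """Sum the polarity of every known word; the sign gives the label."""
    score = sum(lexicon.get(tok, 0.0) for tok in tweet.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_score("فيلم جميل رائع"))  # -> positive
print(lexicon_score("فيلم ممل"))        # -> negative
```

A corpus-based approach would instead learn word weights from labeled tweets, as in the Naive Bayes sketch under the SMS Spam Collection entry.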
21. University of Tehran Question Dataset 2016 (UTQD.2016): Persian questions gathered from a jeopardy game broadcast on Iranian national television.
22. Victorian Era Authorship Attribution: To create the largest authorship attribution dataset, we extracted works of 50 well-known authors. To support non-exhaustive (open-set) learning, the training set contains 45 authors while the test set contains all 50.
23. YouTube Comedy Slam Preference Data: This dataset provides user vote data, collected on YouTube Comedy Slam, on which video from a pair of videos is funnier. The task is to automatically predict this preference based on video metadata.
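Pairwise-preference data like this is commonly modeled by fitting a latent "funniness" score per video, e.g. with the Bradley-Terry model. A minimal sketch of the classic iterative (MM) update, using invented vote counts and video IDs rather than the real data:

```python
# Invented pairwise vote counts: wins[(a, b)] = times video a beat video b.
wins = {("v1", "v2"): 8, ("v2", "v1"): 2,
        ("v2", "v3"): 6, ("v3", "v2"): 4,
        ("v1", "v3"): 7, ("v3", "v1"): 3}

videos = {v for pair in wins for v in pair}
strength = {v: 1.0 for v in videos}

# Bradley-Terry MM update: p_i = W_i / sum_j n_ij / (p_i + p_j),
# where W_i is i's total wins and n_ij the number of i-vs-j comparisons.
for _ in range(200):
    new = {}
    for i in videos:
        w_i = sum(w for (a, b), w in wins.items() if a == i)
        denom = 0.0
        for j in videos:
            if j == i:
                continue
            n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
            denom += n_ij / (strength[i] + strength[j])
        new[i] = w_i / denom
    # Normalize so strengths stay comparable across iterations.
    total = sum(new.values())
    strength = {v: s * len(videos) / total for v, s in new.items()}

ranking = sorted(strength, key=strength.get, reverse=True)
print(ranking)  # -> ['v1', 'v2', 'v3']
```

The fitted strengths then let you predict the preferred video of any pair, including pairs never voted on directly.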
24. Youtube cookery channels viewers comments in Hinglish: The datasets are taken from the top 2 Indian cooking channels, Nisha Madhulika and Kabita's Kitchen. The comments are in the Hinglish language (a Hindi-English code mix).
25. YouTube Spam Collection: It is a public set of comments collected for spam research. It comprises five datasets composed of 1,956 real messages extracted from five videos that were among the 10 most viewed during the collection period.