MONK's Problems

Donated on 9/30/1992

A set of three artificial classification problems defined over the same attribute space, used to test a wide range of induction algorithms.

Dataset Characteristics: Multivariate
Subject Area: Other
Associated Tasks: Classification
Feature Type: Categorical
# Instances: 432
# Features: 6

Dataset Information

Additional Information

The MONK's problems were the basis of the first international comparison of learning algorithms. The results of this comparison are summarized in "The MONK's Problems - A Performance Comparison of Different Learning Algorithms" by S.B. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang, published as Technical Report CMU-CS-91-197, Carnegie Mellon University, December 1991. One significant characteristic of this comparison is that it was performed by a collection of researchers, each of whom was an advocate of the technique they tested (often they were the creators of the various methods). In this sense, the results are less biased than in comparisons performed by a single person advocating a specific learning method, and they more accurately reflect the generalization behavior of the learning techniques as applied by knowledgeable users.

There are three MONK's problems. The domains for all three problems are the same (described below). One of the problems has noise added. For each problem, the domain has been partitioned into a training set and a test set.
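
For reference, here is a minimal loading sketch in Python. It assumes the space-separated file layout used by the UCI archive (columns: class, a1 through a6, Id, matching the Variables Table below) and the base URL and file names shown; those are assumptions, so adjust them to wherever your copy of the data lives.

import pandas as pd

# Column layout taken from the Variables Table below; the base URL and
# file names (monks-1.train, monks-1.test, ...) are assumptions about the
# UCI archive layout -- point them at your local copies if they differ.
COLUMNS = ["class", "a1", "a2", "a3", "a4", "a5", "a6", "Id"]
BASE = "https://archive.ics.uci.edu/ml/machine-learning-databases/monks-problems"

def load_monks(problem=1):
    """Return (train, test) DataFrames for MONK's problem 1, 2, or 3."""
    train = pd.read_csv(f"{BASE}/monks-{problem}.train", sep=r"\s+",
                        header=None, names=COLUMNS)
    test = pd.read_csv(f"{BASE}/monks-{problem}.test", sep=r"\s+",
                       header=None, names=COLUMNS)
    return train, test

train, test = load_monks(1)
print(train.shape, test.shape)  # the test files cover the full 432-instance domain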

Has Missing Values? No

Introductory Paper

S.B. Thrun et al. "The MONK's Problems - A Performance Comparison of Different Learning Algorithms." Technical Report CMU-CS-91-197, Carnegie Mellon University, 1991.

Variables Table

Variable Name   Role     Type         Description  Units  Missing Values
class           Target   Binary                           no
a1              Feature  Integer                          no
a2              Feature  Integer                          no
a3              Feature  Integer                          no
a4              Feature  Integer                          no
a5              Feature  Integer                          no
a6              Feature  Integer                          no
ID              ID       Categorical                      no

Additional Variable Information

1. class: 0, 1
2. a1: 1, 2, 3
3. a2: 1, 2, 3
4. a3: 1, 2
5. a4: 1, 2, 3
6. a5: 1, 2, 3, 4
7. a6: 1, 2
8. Id: a unique symbol for each instance
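
As a sanity check on the figures above, the cross product of these six value ranges enumerates the complete attribute space, and its size matches the 432 instances reported in the dataset characteristics. A short sketch (the value ranges are copied from the list above; everything else is illustrative):

import itertools

# Attribute value ranges, copied from the list above.
DOMAINS = {
    "a1": [1, 2, 3],
    "a2": [1, 2, 3],
    "a3": [1, 2],
    "a4": [1, 2, 3],
    "a5": [1, 2, 3, 4],
    "a6": [1, 2],
}

# Every combination of attribute values, i.e. the full MONK's domain.
all_instances = [dict(zip(DOMAINS, values))
                 for values in itertools.product(*DOMAINS.values())]

print(len(all_instances))  # 3 * 3 * 2 * 3 * 4 * 2 = 432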

Creators

J. Wnek
