Center for Machine Learning and Intelligent Systems

Chess (King-Rook vs. King) Data Set
Download: Data Folder, Data Set Description

Abstract: Chess Endgame Database for White King and Rook against Black King (KRK).

Data Set Characteristics: Multivariate
Number of Instances: 28056
Attribute Characteristics: Categorical, Integer
Number of Attributes: 6
Date Donated: 1994-08-01
Associated Tasks: Classification
Missing Values? No


Source:

Database generated by Michael Bain and Arthur van Hoff at the Turing Institute, Glasgow, UK.


Michael Bain (mike '@'), AI Lab, Computer Science,
University of New South Wales, Sydney 2052, Australia.
(tel) +61 2 385 3939
(fax) +61 2 663 4576

Data Set Information:

An Inductive Logic Programming (ILP) or relational learning framework is assumed (Muggleton, 1992). The learning system is provided with examples of chess positions described only by the coordinates of the pieces on the board. Background knowledge in the form of row and column differences is also supplied. The relations necessary to form a correct and concise classifier for the target concept must be discovered by the learning system (the examples already provide a complete extensional definition). The task is closely related to Quinlan's (1983) application of ID3 to classify White King and Rook against Black King and Knight (KRKN) positions as lost 2-ply or lost 3-ply. The framework is similar in that the example positions supply only low-grade data. An important difference is that additional background predicates of the kind supplied in the KRKN study via hand-crafted attributes are not provided for this KRK domain.
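The background knowledge described above can be sketched as follows. This is an illustrative reconstruction, not code from the original ILP experiments: positions are given only as piece coordinates, and the learner is supplied with row and column difference relations from which concepts such as "same file" or "kings adjacent" can be discovered. All names here are hypothetical.

```python
FILES = "abcdefgh"  # columns a..h mapped to 0..7

def file_diff(f1: str, f2: str) -> int:
    """Column difference between two files, e.g. 'a' and 'c' -> 2."""
    return abs(FILES.index(f1) - FILES.index(f2))

def rank_diff(r1: int, r2: int) -> int:
    """Row difference between two ranks (1..8)."""
    return abs(r1 - r2)

# Example position: White king a1, White rook b3, Black king c2.
wk, wr, bk = ("a", 1), ("b", 3), ("c", 2)

# Relations of this kind are the only background knowledge supplied;
# the learner must combine them into a correct classifier itself.
same_file = wr[0] == bk[0]
kings_adjacent = file_diff(wk[0], bk[0]) <= 1 and rank_diff(wk[1], bk[1]) <= 1
```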

Chess endgames are complex domains which are enumerable. Endgame databases are tables of stored game-theoretic values for the enumerated elements (legal positions) of the domain. The stored game-theoretic values denote whether or not positions are won for either side, or also include the depth of win (number of moves) assuming minimax-optimal play. From the point of view of experiments on computer induction, such databases provide not only a source of examples but also an oracle (Roycroft, 1986) for testing induced rules. However, a chess endgame database differs from, say, a relational database containing details of parts and suppliers in the following important respect: the combinatorics of computing the required game-theoretic values for individual position entries independently would be prohibitive. Therefore all the database entries are generated in a single iterative process using the "standard backup" algorithm (Thompson, 1986).
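The iterative backup idea can be illustrated on a toy game graph (a made-up example, not the KRK state space): instead of solving each position independently, all game-theoretic values are computed together by sweeping backwards from terminal positions until the labelling stabilises.

```python
def backup(successors, terminal_loss):
    """Label each position 'win' (some successor is a loss for the side to
    move there) or 'loss' (terminal loss, or every successor is a win)."""
    value = {p: "loss" for p in terminal_loss}
    changed = True
    while changed:                      # iterate until no label changes
        changed = False
        for pos, succs in successors.items():
            if pos in value:
                continue
            if any(value.get(s) == "loss" for s in succs):
                new = "win"             # can move to a lost position
            elif succs and all(value.get(s) == "win" for s in succs):
                new = "loss"            # every move leads to a won position
            else:
                continue                # value not yet determined
            value[pos] = new
            changed = True
    return value
```

Positions that never acquire a label are the drawn ones; a real endgame generator additionally records the depth of win at each sweep.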

A KRK database was described by Clarke (1977). The current database was described and used for machine learning experiments in Bain (1992; 1994). It should be noted that our database is not guaranteed correct, but its class distribution is the same as in Clarke's database. In Bain (1992; 1994) the task was classification of positions in the database as won for White in a fixed number of moves, assuming optimal play by both sides. The problem was structured into separate sub-problems by depth-of-win, ordered draw, zero, one, ..., sixteen. When learning depth d, all examples at depths > d are used as negatives. Quinlan (1994) applied FOIL to learn a complete and correct solution for this task.
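The sub-problem decomposition described above can be sketched as a small helper: classes are ordered draw, zero, one, ..., sixteen, and when learning class d the examples of every later class serve as negatives. The class names follow the dataset's attribute values; the helper itself is illustrative.

```python
ORDER = ["draw"] + ["zero", "one", "two", "three", "four", "five", "six",
                    "seven", "eight", "nine", "ten", "eleven", "twelve",
                    "thirteen", "fourteen", "fifteen", "sixteen"]

def sub_problem(examples, depth):
    """Split (position, class) pairs into positives/negatives for one depth."""
    later = set(ORDER[ORDER.index(depth) + 1:])
    pos = [x for x, c in examples if c == depth]
    neg = [x for x, c in examples if c in later]
    return pos, neg
```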

The typical complexity of induced classifiers in this domain suggests that the task is demanding when background knowledge is restricted.

Attribute Information:

1. White King file (column)
2. White King rank (row)
3. White Rook file
4. White Rook rank
5. Black King file
6. Black King rank
7. Optimal depth-of-win for White in 0 to 16 moves, otherwise drawn {draw, zero, one, two, ..., sixteen}.
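A hedged sketch of reading one record, assuming the usual UCI layout of one comma-separated line per position in the attribute order listed above (the exact file format is an assumption, as is every name below).

```python
def parse_record(line):
    """Parse a line such as 'a,1,b,3,c,2,draw' into coordinates plus class."""
    wkf, wkr, wrf, wrr, bkf, bkr, label = line.strip().split(",")
    return {
        "white_king": (wkf, int(wkr)),   # file is a letter a-h, rank 1-8
        "white_rook": (wrf, int(wrr)),
        "black_king": (bkf, int(bkr)),
        "class": label,                  # draw, zero, one, ..., sixteen
    }
```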

Relevant Papers:

M. Bain. "Learning optimal chess strategies". ILP 92: ICOT TM-1182, S. Muggleton, ed. Institute for New Generation Computer Technology, Tokyo, Japan, 1992.

M. Bain. Learning Logical Exceptions in Chess. PhD dissertation. University of Strathclyde, 1994.

M. R. B. Clarke. A Quantitative Study of King and Pawn Against King. Advances in Computer Chess, 1, 108-110. M. R. B. Clarke, ed. Edinburgh University Press, Edinburgh, 1977.

S. Muggleton. Inductive Logic Programming, 3-27. S. Muggleton, ed. Academic Press, London, 1992.

J. R. Quinlan. Learning Efficient Classification Procedures and their Application to Chess End Games. Machine Learning: An Artificial Intelligence Approach, 464-482. R. Michalski, J. Carbonell and T. Mitchell, eds. Tioga, Palo Alto, CA, 1983.

A. J. Roycroft. Database "Oracles": Necessary and desirable features. International Computer Chess Association Journal, 8, 2, 100-104, 1986.

K. Thompson. Retrograde Analysis of Certain Endgames. International Computer Chess Association Journal, 8, 3, 131-139, 1986.


Citation Request:

Please refer to the Machine Learning Repository's citation policy.

