Center for Machine Learning and Intelligent Systems

Chess (King-Rook vs. King-Knight) Data Set
Download: Data Folder, Data Set Description

Abstract: Knight Pin Chess End-Game Database Creator

Data Set Characteristics: Multivariate, Data-Generator

Attribute Characteristics: Categorical, Integer

Number of Instances:

Number of Attributes:

Date Donated:

Associated Tasks:

Missing Values?

Source:

Database originally described by Ross Quinlan.

Donor: Jeff Schlimmer (Jeff.Schlimmer '@'
Data Set Information:

The companion file is a Common Lisp demonstration file that generates knight-pin chess end-game samples. Start up Lisp and load the file; it generates 100 end-games and writes them to a separate file. See the end of the file for how to change it to produce more end-games, or to direct the output to a file of your choice.

The code is released for experimental, confidential use only. See the end of the file for load-time commands that generate a file of examples in Quinlan's format.

Note: this program generates duplicates. In one run, there were about 370 duplicates among the first 1000 instances (i.e., roughly 630 distinct examples).
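Since the generator emits duplicates, downstream users may want to filter them out before training. A minimal sketch in Python (the generator itself is Common Lisp; the instance representation here is hypothetical, shown as plain tuples):

```python
def deduplicate(instances):
    """Keep the first occurrence of each instance, preserving order."""
    seen = set()
    unique = []
    for inst in instances:
        if inst not in seen:
            seen.add(inst)
            unique.append(inst)
    return unique

# Illustrative values only; real instances carry the attributes and
# class label described under "Attribute Information" below.
sample = [(1, 2, "diag"), (2, 1, "rect"), (1, 2, "diag")]
print(deduplicate(sample))  # the repeated first instance is dropped
```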

Attribute Information:

Attribute Summaries:
Class: knight's side is lost in n ply (n = 2, 3, etc.)
1. distance from black king to knight: 1, 2, >2
2. distance from black king to rook: 1, 2, >2
3. distance from black king to white king: 1, 2, >2
4. distance from white king to knight: 1, 2, >2
5. distance from white king to rook: 1, 2, >2
6. distance from rook to knight (ADDED): 1, 2, >2
7. board relationship of black king and knight (ADDED): diag, rect, other
8. board relationship of black king and rook (ADDED): diag, rect, other
9. board relationship of black king and white king (ADDED): diag,rect,other
10. board relationship of white king and knight (ADDED): diag, rect, other
11. board relationship of white king and rook (ADDED): diag, rect, other
12. board relationship of white rook and knight (ADDED): diag, rect, other
13. type of black king's initial square: corner, edge, open
14. type of black knight's initial square (ADDED): corner, edge, open
15. type of white king's initial square (ADDED): corner, edge, open
16. type of white rook's initial square (ADDED): corner, edge, open
17. rook checks black king (OMITTED, always f): t, f
18. rook threatens knight (OMITTED, always t): t, f
19. knight threatens rook (OMITTED, always f): t, f
20. black king, knight, rook in line (OMITTED, always t): t, f
21. black king can move adjacent to knight (OMITTED): t, f
22. knight can interpose adjacent to king (OMITTED): t, f
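For readers reimplementing these features, the distance, relationship, and square-type attributes can plausibly be derived from board coordinates. A hedged Python sketch, assuming 0-7 file/rank coordinates, king-move (Chebyshev) distance bucketed as 1, 2, or >2, "diag" meaning same diagonal, and "rect" meaning same file or rank; the Lisp generator's actual definitions may differ:

```python
def distance_bucket(a, b):
    """King-move (Chebyshev) distance between squares, bucketed."""
    d = max(abs(a[0] - b[0]), abs(a[1] - b[1]))
    return ">2" if d > 2 else str(d)

def relationship(a, b):
    """Board relationship between two squares."""
    if abs(a[0] - b[0]) == abs(a[1] - b[1]):
        return "diag"   # same diagonal
    if a[0] == b[0] or a[1] == b[1]:
        return "rect"   # same file or same rank
    return "other"

def square_type(sq):
    """Classify a square as corner, edge, or open."""
    on_edge = [c in (0, 7) for c in sq]
    if all(on_edge):
        return "corner"
    if any(on_edge):
        return "edge"
    return "open"

bk, kn = (0, 0), (2, 1)          # e.g. black king a1, knight c2
print(distance_bucket(bk, kn))   # "2"
print(relationship(bk, kn))      # "other"
print(square_type(bk))           # "corner"
```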

Relevant Papers:

Quinlan, J.R. (1983). Learning Efficient Classification Procedures and Their Application to Chess End Games. In R.S. Michalski, J.G. Carbonell, & T.M. Mitchell (Eds.), Machine Learning -- An Artificial Intelligence Approach, 463-482, Palo Alto: Tioga.

Papers That Cite This Data Set [1]:

Stanley Robson de Medeiros Oliveira. Data Transformation For Privacy-Preserving Data Mining. PhD thesis, University of Alberta. 2005. [View Context].

Ira Cohen and Fabio Gagliardi Cozman and Nicu Sebe and Marcelo Cesar Cirelo and Thomas S. Huang. Semisupervised Learning of Classifiers: Theory, Algorithms, and Their Application to Human-Computer Interaction. IEEE Trans. Pattern Anal. Mach. Intell, 26. 2004. [View Context].

Marcus Hutter and Marco Zaffalon. Distribution of Mutual Information from Complete and Incomplete Data. CoRR, csLG/0403025. 2004. [View Context].

Douglas Burdick and Manuel Calimlim and Jason Flannick and Johannes Gehrke and Tomi Yiu. MAFIA: A Performance Study of Mining Maximal Frequent Itemsets. FIMI. 2003. [View Context].

Tanzeem Choudhury and James M. Rehg and Vladimir Pavlovic and Alex Pentland. Boosting and Structure Learning in Dynamic Bayesian Networks for Audio-Visual Speaker Detection. ICPR (3). 2002. [View Context].

Marco Zaffalon and Marcus Hutter. Robust Feature Selection by Mutual Information Distributions. CoRR, csAI/0206006. 2002. [View Context].

Michael G. Madden. Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm. CoRR, csLG/0211003. 2002. [View Context].

James Bailey and Thomas Manoukian and Kotagiri Ramamohanarao. Fast Algorithms for Mining Emerging Patterns. PKDD. 2002. [View Context].

Russell Greiner and Wei Zhou. Structural Extension to Logistic Regression: Discriminative Parameter Learning of Belief Net Classifiers. AAAI/IAAI. 2002. [View Context].

Boonserm Kijsirikul and Sukree Sinthupinyo and Kongsak Chongkasemwongse. Approximate Match of Rules Using Backpropagation Neural Networks. Machine Learning, 44. 2001. [View Context].

Jinyan Li and Guozhu Dong and Kotagiri Ramamohanarao and Limsoon Wong. DeEPs: A New Instance-based Discovery and Classification System. Proceedings of the Fourth European Conference on Principles and Practice of Knowledge Discovery in Databases. 2001. [View Context].

Jie Cheng and Russell Greiner. Learning Bayesian Belief Network Classifiers: Algorithms and System. Canadian Conference on AI. 2001. [View Context].

Jinyan Li and Guozhu Dong and Kotagiri Ramamohanarao. Instance-Based Classification by Emerging Patterns. PKDD. 2000. [View Context].

Mark A. Hall. Correlation-based Feature Selection for Machine Learning. PhD thesis, Department of Computer Science, University of Waikato, Hamilton, New Zealand. 1999. [View Context].

Yk Huhtala and Juha Kärkkäinen and Pasi Porkka and Hannu Toivonen. Efficient Discovery of Functional and Approximate Dependencies Using Partitions. ICDE. 1998. [View Context].

Adam J. Grove and Dale Schuurmans. Boosting in the Limit: Maximizing the Margin of Learned Ensembles. AAAI/IAAI. 1998. [View Context].

Ron Kohavi. Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. KDD. 1996. [View Context].

Ron Kohavi and Dan Sommerfield. Feature Subset Selection Using the Wrapper Method: Overfitting and Dynamic Search Space Topology. KDD. 1995. [View Context].

Brian R. Gaines. Structured and Unstructured Induction with EDAGs. KDD. 1995. [View Context].

Hankil Yoon and Khaled A. Alsabti and Sanjay Ranka. Tree-based Incremental Classification for Large Datasets. CISE Department, University of Florida. [View Context].

Omid Madani and David M. Pennock and Gary William Flake. Co-Validation: Using Model Disagreement to Validate Classification Algorithms. Yahoo! Research Labs. [View Context].

M. A. Galway and Michael G. Madden. Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm. Technical report NUIG-IT-011002, Department of Information Technology, National University of Ireland, Galway. [View Context].

Pat Langley and Wayne Iba and Kevin Thompson. An Analysis of Bayesian Classifiers. In Proceedings of the Tenth National Conference on Artificial Intelligence. [View Context].

Jerome H. Friedman and Ron Kohavi and Youngkeol Yun. Lazy Decision Trees. To appear in AAAI-96. Statistics Department and Stanford Linear Accelerator Center, Stanford University. [View Context].

Grigorios Tsoumakas and Ioannis P. Vlahavas. Fuzzy Meta-Learning: Preliminary Results. Greek Secretariat for Research and Technology. [View Context].

Nikunj C. Oza and Stuart J. Russell. Online Bagging and Boosting. Computer Science Division University of California. [View Context].

Citation Request:

Please refer to the Machine Learning Repository's citation policy.

[1] Papers were automatically harvested and associated with this data set, in collaboration with
