Center for Machine Learning and Intelligent Systems

Primary Tumor Data Set

Below are papers that cite this data set, with context shown. Papers were automatically harvested and associated with this data set, in collaboration with

Return to Primary Tumor data set page.

Remco R. Bouckaert. Accuracy bounds for ensembles under 0-1 loss. Xtal Mountain Information Technology & Computer Science Department, University of Waikato. 2002.

voting respectively. Also the line A = # A and the approximation (8) are plotted. All datapoints lie between those two lines, except the datapoint for primary tumor. To get more datapoints, 110 datasets were generated by randomly generating a Bayesian network and populating it with randomly selected probability tables. The data was generated by instantiating the variables one by one according to the

Xavier Llorà and David E. Goldberg and Ivan Traus and Ester Bernadó i Mansilla. Accuracy, Parsimony, and Generality in Evolutionary Learning Systems via Multiobjective Selection. IWLCS. 2002.

Bupa Liver Disorders (bpa), Wisconsin Breast Cancer (bre), Glass (gls), Ionosphere (ion), Iris (irs), Primary Tumor (prt), and Sonar (son). These data sets contain categorical and numeric attributes, as well as binary and n-ary classification tasks. We also run several evolutionary and non-evolutionary classifier schemes on the previous data sets. The

Igor Kononenko and Edvard Simec and Marko Robnik-Sikonja. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell, 7. 1997.

This corresponds to the estimates by RELIEFF with a very large number of nearest hits/misses. To test the effect of the normalization factor in eq. (3) we ran RELIEFF also on one well-known medical data set, `primary tumor', described in Section 5.3. The major difference between the estimates by impurity functions and the estimates by RELIEFF in the ``primary tumor'' problem is in

Pedro Domingos. Control-Sensitive Feature Selection for Lazy Learners. Artif. Intell. Rev, 11. 1997.

used in the empirical study, in particular M. Zwitter and M. Soklic of the University Medical Centre, Ljubljana, for supplying the lymphography, breast cancer and primary tumor datasets, and Robert Detrano, of the V.A. Medical Center, Long Beach and Cleveland Clinic Foundation, for supplying the heart disease dataset. Please see the documentation in the UCI Repository for detailed

Kamal Ali and Michael J. Pazzani. Error Reduction through Learning Multiple Descriptions. Machine Learning, 24. 1996.

uniformly weighted voting between eleven, stochastically-generated descriptions is only one seventh that of the error obtained by using a single description. On the other hand, on the primary tumor data set, the error obtained by the identical multiple models procedure is the same as that obtained by using a single description. Much of the work on learning multiple models is motivated by Bayesian

Geoffrey I. Webb. OPUS: An Efficient Admissible Algorithm for Unordered Search. J. Artif. Intell. Res. (JAIR), 3. 1995.

I am especially indebted to my anonymous reviewers whose insightful, extensive and detailed comments greatly improved the quality of this paper. The Breast Cancer, Lymphography and Primary Tumor data sets were provided by the Ljubljana Oncology Institute, Slovenia. Thanks to the UCI Repository, its maintainers, Patrick Murphy and David Aha, and its donors, for providing access to the data sets used

Geoffrey I. Webb. Learning Decision Lists by Prepending Inferred Rules. School of Computing and Mathematics, Deakin University.

supported by the Australian Research Council. I am grateful to Mike Cameron-Jones for discussions that helped refine the ideas presented herein. The Breast Cancer, Lymphography and Primary Tumor data sets were compiled by M. Zwitter and M. Soklic at University Medical Centre, Institute of Oncology, Ljubljana, Yugoslavia. The Audiology data set was compiled by Professor Jergen at Baylor College of

Alexander K. Seewald. Dissertation: Towards Understanding Stacking. Studies of a General Ensemble Learning Scheme. Submitted in fulfilment of the requirements for the academic degree of Doctor of Technical Sciences.

[Figure 6.4: Learning curves for datasets primary tumor to zoo; x-axis: training set size (8-9=CV, 7=75%, 6=62%, ..., 1=25%), y-axis: hold-out accuracy.] Chapter 7, Towards a Theoretical Framework: In this chapter, we show that the ensemble learning scheme Stacking is universal in the sense that most ensemble learning schemes

