Balance Scale Data Set

Below are papers that cite this data set, with context shown. Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.



Jianbin Tan and David L. Dowe. MML Inference of Decision Graphs with Multi-way Joins and Dynamic Attributes. Australian Conference on Artificial Intelligence. 2003.

which are treated as unordered discrete attributes in our tests, because the data files from UCI are distributed in this format. We performed 10 independent 10-fold cross-validations again on the data set. Balance scale data set: the balance scale data set from the UCI repository [1] was generated to model psychological experimental results. There are 625 instances with 3 output classes in the set.
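As a rough illustration of that evaluation protocol (not the paper's own code), here is a minimal Python sketch of 10 independent 10-fold cross-validations; the use of scikit-learn, OpenML for data loading, and a decision tree as the classifier are all assumptions:

    # Minimal sketch of 10 independent 10-fold cross-validations on the
    # balance scale data (625 instances, 3 classes). The classifier and the
    # data-loading route are illustrative assumptions, not the paper's setup.
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = fetch_openml("balance-scale", version=1, return_X_y=True, as_frame=False)

    # 10 repetitions of 10-fold CV, each repetition with a fresh shuffle.
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    print("mean accuracy over 10 x 10 folds: %.3f (std %.3f)" % (scores.mean(), scores.std()))

RepeatedStratifiedKFold reshuffles the fold assignment on every repetition, which is what makes the 10 cross-validations independent.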


Zhi-Hua Zhou and Yuan Jiang and Shifu Chen. Extracting symbolic rules from trained neural network ensembles. AI Commun, 16. 2003.

is tidied and the extraction process terminates. In summary, the flowchart of REFNE is depicted in Fig. 1. [Section 4, Experiments; 4.1, Summary] In this section, we report experiments with REFNE on six UCI data sets [4]: balance scale, congressional voting records, hepatitis, iris plant, statlog australian credit approval, and statlog german credit. Since we do not want to test the ability of REFNE in


Peter Sykacek and Stephen J. Roberts. Adaptive Classification by Variational Kalman Filtering. NIPS. 2002.

sequential variational inference. The probability of the null hypothesis, H0, that both classifiers are equal suggests that only the differences for the Balance scale and the Pima Indians data sets are significant, with either method being better in one case. Since the generalization accuracies of both methods are almost identical, we conclude that if applied to [footnote 2: Vehicle data was donated to
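The excerpt does not preserve the paper's actual test statistic; as a generic stand-in for "probability of the null hypothesis that both classifiers are equal", here is a minimal sketch using a paired t-test over shared cross-validation folds. The classifiers, the OpenML data loading, and the t-test itself are assumptions, not the paper's variational method:

    # Hypothetical sketch: paired t-test over shared CV folds to ask whether
    # two classifiers differ significantly in accuracy.
    from scipy.stats import ttest_rel
    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = fetch_openml("balance-scale", version=1, return_X_y=True, as_frame=False)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
    b = cross_val_score(KNeighborsClassifier(), X, y, cv=cv)
    t, p = ttest_rel(a, b)  # small p => the accuracy difference is significant
    print("p(null: equal accuracy) = %.3f" % p)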


Remco R. Bouckaert. Accuracy bounds for ensembles under 0-1 loss. Xtal Mountain Information Technology & Computer Science Department, University of Waikato. 2002.

of 100 cases were generated and the cardinality of the variables was varied from 2 to 12. [Footnote 3: Weka can be obtained from http://www.cs.waikato.ac.nz/ml/. Footnote 4: The following datasets were used: autos, balance scale, breast-cancer, breast-w, horse-colic, credit-rating, german-credit, pima-diabetes, glass, heart-c, heart-h, heart-statlog, hepatitis, iris, labor, lymphography,


Nir Friedman and Moisés Goldszmidt and Thomas J. Lee. Bayesian Network Classification with Continuous Attributes: Getting the Best of Both Discretization and Parametric Fitting. ICML. 1998.

experimental results show, the additional flexibility of the mixture results in drastically improved performance in the cases where the Gaussian TAN did poorly (see, for example, the accuracy of the data sets "anneal-U" and balance scale in Table 1). In this paper, we learned mixtures only when modeling a continuous feature with discrete parents. We note, however, that learning a mixture of linear
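As background for the idea of replacing a single conditional Gaussian with a mixture, here is a minimal sketch that fits a two-component Gaussian mixture to a continuous feature separately for each value of a discrete parent; the synthetic data, the component count, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the paper's estimator:

    # Hypothetical sketch: model p(x | parent) with a per-parent-value Gaussian
    # mixture instead of a single Gaussian. Data are synthetic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # A continuous feature whose shape depends on a binary discrete parent:
    # bimodal under parent=0, unimodal under parent=1.
    x_given_parent = {
        0: np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)]),
        1: rng.normal(0, 1.0, 400),
    }
    for parent, x in x_given_parent.items():
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
        print("parent=%d component means:" % parent, gmm.means_.ravel().round(2))

A single Gaussian would blur the two modes under parent=0 together; the mixture recovers them, which is the flexibility the excerpt credits for the improved accuracy.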


Alexander K. Seewald. Towards Understanding Stacking: Studies of a General Ensemble Learning Scheme. Dissertation, carried out to obtain the academic degree of Doktor der technischen Naturwissenschaften (Doctor of Technical Sciences).

the baseline accuracy is already 66.7%. Interestingly, in this case the best model is from DecisionStump, which learns a single J48 node, obtaining 88.9% accuracy, corresponding to a single error on the balance scale dataset. It seems J48 is prone to overfitting on this meta-dataset. The training set model is based on meanAbsSkew. All but two times, the following model appears: meanAbsSkew <= 0.31 : class
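A decision stump is a one-split decision tree, i.e. a single threshold rule of exactly the "meanAbsSkew <= 0.31 : class" form quoted above. A minimal sketch, using scikit-learn's DecisionTreeClassifier with max_depth=1 as a stand-in for Weka's DecisionStump, and synthetic data in place of the paper's meta-features:

    # Minimal sketch of a decision stump: a depth-1 tree that learns one rule
    # "feature <= t : class A, else class B". Synthetic single-feature data
    # stands in for the meanAbsSkew meta-feature; 0.31 is taken from the excerpt.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    mean_abs_skew = rng.uniform(0.0, 1.0, 200).reshape(-1, 1)
    labels = (mean_abs_skew.ravel() > 0.31).astype(int)  # assumed ground-truth rule

    stump = DecisionTreeClassifier(max_depth=1).fit(mean_abs_skew, labels)
    print("learned threshold: %.2f" % stump.tree_.threshold[0])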


Hirotaka Inoue and Hiroyuki Narihisa. Experiments with an Ensemble Self-Generating Neural Network. Okayama University of Science.

The average classification accuracy of ten trials for single SGNN, ESGNN, nearest neighbor (1-NN), and 3-nearest neighbor (3-NN) with shuffling. The standard deviation is given in parentheses.

Dataset          SGNN           ESGNN          1-NN           3-NN
balance scale    0.781 (0.053)  0.843 (0.059)  0.771 (0.057)  0.816 (0.049)
breast-cancer-w  0.954 (0.020)  0.967 (0.023)  0.954 (0.025)  0.963 (0.024)
glass            0.632 (0.102)  0.692 (0.075)
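The style of evaluation behind the 1-NN and 3-NN columns can be reproduced with a short sketch; the 50/50 shuffled split, the OpenML data loading, and scikit-learn's KNeighborsClassifier are assumptions, and SGNN/ESGNN themselves are not implemented here:

    # Hypothetical sketch: average accuracy (std) over ten shuffled trials for
    # 1-NN and 3-NN, mirroring the table's reporting format.
    import numpy as np
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = fetch_openml("balance-scale", version=1, return_X_y=True, as_frame=False)
    for k in (1, 3):
        accs = []
        for trial in range(10):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.5, shuffle=True, random_state=trial)
            accs.append(
                KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te))
        print("%d-NN: %.3f (%.3f)" % (k, np.mean(accs), np.std(accs)))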


Alexander K. Seewald. Meta-Learning for Stacked Classification. Austrian Research Institute for Artificial Intelligence.

the baseline accuracy is already 66.7%. Interestingly, in this case the best model is from DecisionStump, which learns a single J48 node, obtaining 88.9% accuracy, corresponding to a single error on the balance scale dataset. It seems J48 is prone to overfitting on this meta-dataset. The training set model is based on meanAbsSkew. The models from the nine folds are more diverse: seven times, the following

