Nursery Data Set

Below are papers that cite this data set, with context shown. Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.



Daniel J. Lizotte and Omid Madani and Russell Greiner. Budgeted Learning of Naive-Bayes Classifiers. UAI. 2003.

75 times, while a non-discriminative feature such as feature 18 is bought an average of only 2 times. For some budgets, the 0/1 error of SFL is nearly half that generated by round robin. The nursery dataset (Figure 4(c)) is a five class problem with nine features that can take on between two and five values. The relative performances of the policies are closer to each other, but their behaviour is
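For context, the round-robin baseline the excerpt compares against simply cycles through the features, paying one unit per queried value. A minimal sketch under that assumption, in a toy data shape echoing nursery's nine categorical features with two to five values each (all names and sizes are illustrative, not the authors' code):

    import random

    # Toy budgeted-learning setup: feature values are hidden until purchased.
    N_INSTANCES, N_FEATURES = 100, 9           # nursery-like shape, toy size
    arity = [random.randint(2, 5) for _ in range(N_FEATURES)]
    hidden = [[random.randrange(arity[f]) for f in range(N_FEATURES)]
              for _ in range(N_INSTANCES)]
    observed = [[None] * N_FEATURES for _ in range(N_INSTANCES)]

    def round_robin(budget):
        """Cycle through the features, buying one unobserved value per step."""
        bought, f = 0, 0
        while bought < budget and bought < N_INSTANCES * N_FEATURES:
            row = next((i for i in range(N_INSTANCES)
                        if observed[i][f] is None), None)
            if row is not None:
                observed[row][f] = hidden[row][f]   # pay one unit, reveal value
                bought += 1
            f = (f + 1) % N_FEATURES                # next feature regardless

    round_robin(budget=75)
    per_feature = [sum(r[f] is not None for r in observed)
                   for f in range(N_FEATURES)]
    print(per_feature)  # purchases spread evenly, unlike SFL's focused buying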


Michael G. Madden. Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm. CoRR, csLG/0211003. 2002.

one for each of the analyses described above in Section 4.2. ROC graphs are best suited to two-class problems, which all but one of the datasets are. For the Nursery dataset, the ROC curve is for the prediction of the 'Priority' class. On a ROC graph, the point (0, 0) represents the strategy of never returning a positive classification, no
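The point (0, 0) corresponds to the degenerate classifier that never predicts the positive class: zero false positives, but also zero true positives. A small sketch of how such a point is computed (function and variable names are illustrative):

    def roc_point(y_true, y_pred, positive="Priority"):
        """Return (false positive rate, true positive rate) for hard labels."""
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        pos = sum(t == positive for t in y_true)
        neg = len(y_true) - pos
        return (fp / neg if neg else 0.0, tp / pos if pos else 0.0)

    labels = ["Priority", "Other", "Other", "Priority"]
    never_positive = ["Other"] * len(labels)    # the strategy described above
    print(roc_point(labels, never_positive))    # -> (0.0, 0.0)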


Marina Meila and Michael I. Jordan. Learning with Mixtures of Trees. Journal of Machine Learning Research, 1. 2000.

the MT has m = 12, and the MF has m = 30. (b) On the NURSERY data set, the MT has m = 30, the MF has m = 70. TANB and NB are the tree augmented naive Bayes and the naive Bayes classifiers respectively. The plots show the average and standard deviation test set error


Jinyan Li and Guozhu Dong and Kotagiri Ramamohanarao. Instance-Based Classification by Emerging Patterns. PKDD. 2000.

(as explained in [10]). Note that for the datasets such as chess, flare, nursery, splice, mushroom, voting, soybean-l, t-t-t, and zoo, which do not contain any continuous attributes, DeEPs does not need α. Columns 5, 6, 7, 8, and 9 give the


Jie Cheng and Russell Greiner. Comparing Bayesian Network Classifiers. UAI. 1999.

and GBN and TAN each did best on two of the datasets. On the data sets Nursery and "Car", the GBN classifier was inferior to the Naïve-Bayes. The reason is, in both cases, the GBN actually reduced to the Naïve-Bayes with missing links (the reduced


Nikunj C. Oza and Stuart J. Russell. Online Bagging and Boosting. Computer Science Division University of California.

AdaBoost and online boosting performed significantly better than Naive Bayes. On the Nursery dataset, AdaBoost performed significantly better and online boosting performed marginally better. (Section 5, Conclusions) The paper has described online versions of the popular bagging and boosting algorithms and has
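The online bagging scheme evaluated here trains each base model on each arriving example k times, with k drawn from Poisson(1), so the ensemble approximates bootstrap resampling in a single pass over the stream. A minimal sketch, assuming a hypothetical incremental base-learner interface with update and predict methods:

    import math
    import random

    def poisson1(rng=random):
        """Draw k ~ Poisson(lambda=1) via Knuth's multiplication method."""
        limit, k, p = math.exp(-1.0), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    class OnlineBagging:
        """Each base model sees each incoming example k ~ Poisson(1) times.
        The base-learner interface (update/predict) is hypothetical."""

        def __init__(self, make_base, n_models=10):
            self.models = [make_base() for _ in range(n_models)]

        def update(self, x, y):
            for m in self.models:
                for _ in range(poisson1()):
                    m.update(x, y)           # one incremental training step

        def predict(self, x):
            votes = [m.predict(x) for m in self.models]
            return max(set(votes), key=votes.count)   # majority vote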


Daniel J. Lizotte and Omid Madani and Russell Greiner. Budgeted Learning, Part II: The Naïve-Bayes Case. Department of Computing Science, University of Alberta.

75 times, while a non-discriminative feature such as feature 18 is queried an average of only 2 times. For some budgets, the 0/1 error of SFL is nearly half that generated by round robin. The nursery dataset (Figure 4(b)) is a five class problem with nine features that can take on between two and five values. The relative performances of the policies are closer to each other, but their behaviour is


Shi Zhong and Weiyu Tang and Taghi M. Khoshgoftaar. Boosted Noise Filters for Identifying Mislabeled Data. Department of Computer Science and Engineering Florida Atlantic University.

in Table 1. Overall, BBF-I significantly outperforms BBF-II, except for low (≤ 20%) noise levels for the adult, car, and nursery datasets. The reason BBF-II performs poorly may be that too many clean instances are weighted low. The noise filter constructed in the next round loses strong support from clean data instances, which are


Michael G. Madden. Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm. Technical Report NUIG-IT-011002, Department of Information Technology, National University of Ireland, Galway.

one for each of the analyses described above in Section 4.2. ROC graphs are best suited to two-class problems, which all but one of the datasets are. For the Nursery dataset, the ROC curve is for the prediction of the 'Priority' class. On a ROC graph, the point (0, 0) represents the strategy of never returning a positive classification, no


Daniel J. Lizotte. Budgeted Learning of Naive Bayes Classifiers. Thesis, University of Alberta.

from the UCI Machine Learning Repository [BM98]. These plots show averaged validation error of the policies on a holdout set (20% of the dataset) on the mushroom, nursery and votes datasets. Each point is an average of 50 trials where in each trial a random balanced partition of classes was made for training and validation. The five-fold
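The protocol described here, repeated random balanced (stratified) partitions with errors averaged over 50 trials, can be sketched as follows; fit and error stand in for an unspecified learner and loss (illustrative names, not the author's code):

    import random
    from collections import defaultdict

    def balanced_split(y, holdout=0.2, seed=None):
        """Stratified random partition: each class contributes `holdout`
        of its instances to the validation set."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for i, label in enumerate(y):
            by_class[label].append(i)
        train, valid = [], []
        for idxs in by_class.values():
            rng.shuffle(idxs)
            cut = int(len(idxs) * holdout)
            valid += idxs[:cut]
            train += idxs[cut:]
        return train, valid

    def averaged_error(X, y, fit, error, trials=50):
        """Average holdout error over repeated random balanced partitions."""
        errs = []
        for t in range(trials):
            tr, va = balanced_split(y, seed=t)
            model = fit([X[i] for i in tr], [y[i] for i in tr])
            errs.append(error(model, [X[i] for i in va], [y[i] for i in va]))
        return sum(errs) / len(errs)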


Jinyan Li and Kotagiri Ramamohanarao and Guozhu Dong. ICML2000 The Space of Jumping Emerging Patterns and Its Incremental Maintenance Algorithms. Department of Computer Science and Software Engineering, The University of Melbourne, Parkville.

in UCI repository (Blake & Murphy, 1998) to experimentally examine the maintenance algorithms, especially their efficiency. These data sets are mushroom, pima, tic-tac-toe, and nursery. More details can be seen in Table 1. Note that the continuous attributes in the pima data set are discretized by MLC++ techniques (Kohavi et al., 1994).
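MLC++ provides several discretization methods, and the excerpt does not say which one was applied to pima; equal-width binning, sketched below, is one common choice and is offered only as an illustration:

    def equal_width_bins(values, n_bins=5):
        """Discretize a continuous attribute into equal-width bins (one
        common approach; not necessarily the MLC++ method used above)."""
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_bins or 1.0     # guard against constant columns
        return [min(int((v - lo) / width), n_bins - 1) for v in values]

    print(equal_width_bins([0.5, 1.0, 2.5, 7.0, 9.9], n_bins=3))  # [0, 0, 0, 2, 2]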


Gustavo E. A. P. A. Batista and Ronaldo C. Prati and Maria Carolina Monard. A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data. Instituto de Ciências Matemáticas e de Computação.

However, in spite of a large degree of imbalance, the data sets Letter-a and Nursery obtained almost 100% AUC. The results obtained in the UCI data sets seem to be compatible with previous work of the authors [18] conducted on a series of experiments with
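Near-perfect AUC on a heavily imbalanced set is possible because AUC measures ranking quality and is insensitive to class priors. A minimal Mann-Whitney computation (illustrative, not the authors' code):

    def auc(scores_pos, scores_neg):
        """AUC as the probability that a random positive outranks a random
        negative (the Mann-Whitney statistic); ties count one half."""
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in scores_pos for n in scores_neg)
        return wins / (len(scores_pos) * len(scores_neg))

    # Severe imbalance (3 positives vs. 9 negatives), yet perfect ranking:
    print(auc([0.9, 0.8, 0.7],
              [0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.04, 0.03, 0.02]))   # -> 1.0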


