
Annealing Data Set

Below are papers that cite this data set, with context shown. Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.

Return to Annealing data set page.


Yuan Jiang and Zhi-Hua Zhou. Editing Training Data for kNN Classifiers with Neural Network Ensemble. ISNN (1). 2004.

Data set     Categorical  Continuous  Size  Classes
annealing         33           5       798     6
credit             9           6       690     2
glass              0           9       214     7
hayes-roth         4           0       132     3
iris               0           4       150     3
liver              0           6       345     2
pima               0           8       768     2
soybean           35           0       683    19
wine               0          13       178     3
zoo               16          (row truncated in source)
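
The paper above pairs a neural network ensemble with kNN: training points the ensemble misclassifies are removed before the kNN classifier is fit. A minimal sketch follows, assuming scikit-learn and a bagged-MLP ensemble as a stand-in for the paper's exact ensemble construction and voting rules; all names and parameters here are illustrative, not taken from the paper.

    # Sketch: ensemble-edited training set for kNN (assumptions noted above).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Bagged ensemble of small MLPs (requires scikit-learn >= 1.2 for
    # the `estimator` keyword; older versions call it `base_estimator`).
    ensemble = BaggingClassifier(
        estimator=MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000),
        n_estimators=5,
        random_state=0,
    ).fit(X_tr, y_tr)

    # Edit: keep only training instances the ensemble labels correctly.
    keep = ensemble.predict(X_tr) == y_tr
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr[keep], y_tr[keep])
    print("kNN accuracy on edited training set:", knn.score(X_te, y_te))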


Qingping Tao. Making Efficient Learning Algorithms with Exponentially Many Features. Ph.D. dissertation, The Graduate College, University of Nebraska. 2004.

(T_0 = n^2 and T_s = 10 n^2). M = Metropolis, G = Gibbs, MG = Metropolized Gibbs, PT = Parallel Tempering, BF = Brute Force.

Data set   iris       car        breast cancer  voting     auto        annealing
n          4          6          9              16         25          38
M          5.3  2.1   1.7  0.8   31.5  5.0      5.0  2.1   12.8  7.5   1.0  0.7
G          6.7  3.8   1.9  0.8   30.9  5.5      5.0  2.4   15.6  7.8   0.6  0.5
MG         6.0  1.7   (remaining rows truncated in source)
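
For context on the sampler abbreviations, a minimal Metropolis ("M") sketch over binary feature-inclusion vectors is shown below. The target function and the temperature schedule (T_0 = n^2, T_s = 10 n^2) from the dissertation are not reproduced; `score` is a hypothetical stand-in for an unnormalized log-target.

    # Sketch: Metropolis sampling over binary feature subsets.
    import math
    import random

    def metropolis(score, n, steps, temperature=1.0, seed=0):
        rng = random.Random(seed)
        state = [rng.random() < 0.5 for _ in range(n)]  # random initial subset
        current = score(state)
        for _ in range(steps):
            i = rng.randrange(n)           # propose: flip one feature in/out
            state[i] = not state[i]
            proposed = score(state)
            # Accept with probability min(1, exp((proposed - current) / T)).
            if math.log(rng.random() + 1e-300) < (proposed - current) / temperature:
                current = proposed         # accept the flip
            else:
                state[i] = not state[i]    # reject: undo the flip
        return state, current

    # Toy target: prefer subsets with exactly 3 of n = 10 features selected.
    best, val = metropolis(lambda s: -abs(sum(s) - 3), n=10, steps=2000)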


Jihoon Yang and Rajesh Parekh and Vasant Honavar. DistAl: An inter-pattern distance-based constructive learning algorithm. Intell. Data Anal, 3. 1999.

TABLE II: Comparison of generalization accuracy between various algorithms. DistAl gives the results of our approach; NN gives the best results in [40].

Dataset             DistAl  NN
Annealing           96.6    96.1
Audiology           66.0    77.5
Bridge              63.0    60.6
Cancer              97.8    95.6
Credit              87.7    81.5
Flag                65.8    58.8
Glass               70.5    72.4
Heart               86.7    83.1
Heart (Cleveland)   85.3    80.2
Heart (Hungary)     85.9    (truncated in source)


Pedro Domingos. Knowledge Discovery Via Multiple Models. Intell. Data Anal, 2. 1998.

…between the learner's bias and that of the probability estimation procedure is important for good results. Disabling the generation of missing values had a large negative impact on the annealing dataset, where very large numbers of missing values are present, and a less discernible one in the datasets where fewer such values occur. C4.5RULES's pruning parameters during the meta-learning phase can…


James J. Liu and James Tin-Yau Kwok. An Extended Genetic Rule Induction Algorithm. Department of Computer Science, Wuhan University.

(tail of a preceding dataset-description table; the first row's name is truncated in source)
…          368   15    7   2
iris       150    0    4   3
vehicle    846    0   18   4

Table 2: Average accuracies and standard deviations over the ten folds (numbers in bold indicate the highest accuracy obtained over the four methods).

Dataset      majority     RISE         SIA          ESIA
annealing    76.17±0.06   90.65±0.02   86.53±0.03   93.32±0.01
australian   55.51±0.04   85.36±0.02   72.46±0.19   80.58±0.10
breast       65.52±0.02   91.85±0.07   84.84±0.02   94.71±0.04
cleveland    (truncated in source)


Zhi-Hua Zhou and Xu-Ying Liu. Training Cost-Sensitive Neural Networks with Methods Addressing the Class Imbalance Problem. IEEE Transactions on Knowledge and Data Engineering, 18. 2006.

…is apparently worse than that of sole BP. Table X and Fig. 4 also show that threshold-moving is always effective, and soft-ensemble only causes a negative effect on the most seriously imbalanced data set, annealing; SMOTE and hard-ensemble cause negative effects on soybean and annealing. It is noteworthy that the sampling methods cause negative effects on almost all data sets suffering from class…
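
Threshold-moving, which the excerpt reports as consistently effective, leaves the trained network alone and instead rescales its posterior estimates by misclassification costs before taking the argmax. A minimal sketch, assuming inverse class frequencies as the costs (the paper itself works with an explicit cost matrix):

    # Sketch: threshold-moving for imbalanced/cost-sensitive prediction.
    import numpy as np

    def threshold_moving_predict(proba, class_counts):
        """proba: (n_samples, n_classes) posteriors; class_counts: training
        frequency of each class (used here as a stand-in for costs)."""
        costs = 1.0 / np.asarray(class_counts, dtype=float)  # rare class -> high cost
        return (proba * costs).argmax(axis=1)    # shift the decision threshold

    # Example: the minority class (10 vs. 990 training examples) now wins
    # even at a posterior of only 0.15.
    print(threshold_moving_predict(np.array([[0.15, 0.85]]), [10, 990]))  # -> [0]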


