
Servo Data Set

Below are papers that cite this data set, with context shown. The papers were harvested automatically and associated with this data set in collaboration with Rexa.info.



Christopher J. Merz and Michael J. Pazzani. A Principal Components Approach to Combining Regression Estimates. Machine Learning, 36. 1999.

    Data set   Instances  Attributes  Numeric  Source
    baseball   263        16          16       CMU
    bodyfat    252        14          14       CMU
    cpu        209        6           6        UCI
    dementia   118        26          26       UCI-MC
    hansch     111        13          0        QSAR
    housing    506        12          12       UCI
    imports    160        15          15       UCI
    servo      167        4           0        UCI

UCI-MC denotes a data set from the UCI Medical Center. The imports data set had 41 examples with missing values, which were not used due to limitations in one of the learning algorithms used.
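
The combining step the title refers to can be sketched as principal components regression on the constituent learners' outputs. Below is a minimal numpy illustration of that idea; it is a loose sketch, not Merz and Pazzani's exact procedure, and the function name and the choice of k components are assumptions made here.

    import numpy as np

    def combine_by_pca(preds, y, k=2):
        """Combine regression estimates via principal components
        regression on the learners' predictions.

        preds : (n_samples, n_learners) predictions of the
                constituent learners on training data
        y     : (n_samples,) targets
        k     : number of principal components kept (assumed here)
        """
        mu = preds.mean(axis=0)
        X = preds - mu
        # Principal directions of the prediction matrix.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        Z = X @ Vt[:k].T                           # top-k projections
        w, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
        beta = Vt[:k].T @ w                        # per-learner weights
        intercept = y.mean() - mu @ beta
        return beta, intercept

    # New predictions are then combined as: preds_new @ beta + intercept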


H. Altay Guvenir and Ilhan Uysal. Regression on feature projections. Department of Computer Engineering, Bilkent University. 1999.

In order to compare the RFP algorithm with the KNN and Rules learning algorithms, we used the abalone, auto-mpg, buying, country, cpu, electric, flare, housing, read, and servo real-world datasets for function approximation (available at http://funapp.cs.bilkent.edu.tr [11]). The information about the number of instances, the number and type of features, and the presence of missing values are
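
The core idea of regression on feature projections, as far as the excerpt describes it, is to form an estimate on each feature's projection separately and then combine the per-feature estimates. A toy sketch follows, with a k-nearest average standing in for the per-feature estimator and a plain mean for the combination; both are simplifications of the actual RFP algorithm, which weights features and treats categorical features differently.

    import numpy as np

    def rfp_predict(X, y, x_query, k=3):
        """Toy regression-on-feature-projections predictor: average
        the targets of the k nearest training instances along each
        single feature, then average the per-feature estimates."""
        estimates = []
        for j in range(X.shape[1]):
            dist = np.abs(X[:, j] - x_query[j])   # projection on feature j
            nearest = np.argsort(dist)[:k]
            estimates.append(y[nearest].mean())
        return float(np.mean(estimates))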


Art B. Owen. Tubular neighbors for regression and classification. Stanford University. 1999.

and IB3 got 96.7% accuracy (5 errors). Both IB3 and the neural networks have parameters to set, while tubular neighbors makes an automatic choice based on cross-validation. 7.3 Servo data. This data set is from the Irvine repository. The response is the rise time for a servo mechanism. There are two integer-valued predictors taking 4 and 5 consecutive levels and two categorical predictors each
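
Owen's description matches the UCI servo.data layout: two categorical predictors (motor, screw), two integer-valued predictors (pgain, vgain), and the rise time as response. A small loader follows, assuming that comma-separated layout and the usual repository path (both assumptions; adjust the URL if the file has moved).

    import csv
    import urllib.request

    URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "servo/servo.data")   # assumed location of the data file

    def load_servo():
        """Return (motor, screw, pgain, vgain, rise_time) tuples,
        assuming the comma-separated servo.data layout."""
        rows = []
        with urllib.request.urlopen(URL) as f:
            for row in csv.reader(line.decode() for line in f):
                if len(row) != 5:
                    continue            # skip blank trailing lines
                motor, screw, pgain, vgain, rise = row
                rows.append((motor, screw, int(pgain), int(vgain),
                             float(rise)))
        return rows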


Mauro Birattari and Gianluca Bontempi and Hugues Bersini. Lazy Learning Meets the Recursive Least Squares Algorithm. NIPS. 1998.

considered.

    Dataset               Housing  Cpu  Prices  Mpg  Servo  Ozone
    Number of examples    506      209  159     392  167    330
    Number of regressors  13       6    16      7    8      8

where ω_i are weights that can be conveniently used to discount each error according
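
The weighted criterion the excerpt mentions, sum_i ω_i e_i^2, fits naturally into a recursive least squares update in which each incoming example carries its own weight. Below is a minimal sketch of standard weighted RLS; the class name and the prior are assumptions made here, and the authors' lazy-learning formulation involves more than this update alone.

    import numpy as np

    class WeightedRLS:
        """Recursive least squares with a per-example weight w that
        discounts each error, i.e. minimizing sum_i w_i * e_i^2."""

        def __init__(self, n_regressors, prior=1e3):
            self.beta = np.zeros(n_regressors)
            self.P = prior * np.eye(n_regressors)  # inverse information matrix

        def update(self, x, y, w=1.0):
            Px = self.P @ x
            gain = w * Px / (1.0 + w * (x @ Px))
            self.beta += gain * (y - x @ self.beta)
            self.P -= np.outer(gain, Px)
            return self.beta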


D. Greig and Hava T. Siegelmann and Michael Zibulevsky. A New Class of Sigmoid Activation Functions That Don't Saturate. 1997.

a sequence of 100 networks was trained using different values of γ for each hidden node. For the auto-mpg, servo, and Tecator data sets (3 hidden nodes) the γ values (0.5, 1.5, 2.5) were used; for the glass data set (6 hidden nodes), the values (0.5, 1.0, 1.5, 2.0, 2.5, 3.0) were used; and for the bodyfat data set (7 hidden nodes)
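
Reading γ as a per-node gain of the logistic activation, the sweep described above is easy to reproduce. A sketch, assuming the standard logistic-with-gain form; the paper's own non-saturating activations are a different, modified family.

    import numpy as np

    def sigmoid_with_gain(x, gamma):
        """Logistic activation with gain: f(x) = 1 / (1 + exp(-gamma * x))."""
        return 1.0 / (1.0 + np.exp(-gamma * x))

    # One gamma per hidden node, as in the sweep for the 3-node networks:
    gammas = np.array([0.5, 1.5, 2.5])
    hidden = sigmoid_with_gain(np.array([0.2, -1.0, 3.0]), gammas)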


Georg Thimm and E. Fiesler. Optimal Setting of Weights, Learning Rate, and Gain. IDIAP Research Report. 1997.

Multilayer perceptrons behave similarly, as shown in figure 4 and confirmed by experiments performed with the Solar, Wine, Glass, and Servo data sets. The most important difference from high-order perceptrons is that the networks converge very slowly or not at all for weight variances close to zero. Such variances should therefore not be used
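
The quantity varied in these experiments is the variance of the initial weights. A minimal sketch of drawing initial weights at a prescribed variance; the zero-mean Gaussian is an assumption made here, and the excerpt's point is only that variances close to zero make multilayer perceptrons converge very slowly or not at all.

    import numpy as np

    def init_weights(shape, variance, rng=None):
        """Draw zero-mean initial weights with the given variance."""
        if rng is None:
            rng = np.random.default_rng()
        return rng.normal(0.0, np.sqrt(variance), size=shape)

    # e.g. a 4-8-1 network for the 4-attribute Servo data:
    W1 = init_weights((4, 8), variance=0.1)
    W2 = init_weights((8, 1), variance=0.1)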


Georg Thimm and Emile Fiesler. High Order and Multilayer Perceptron Initialization. IDIAP Technical Report. 1994.

itself has a large influence on the optimal initial weight variance: for the solar, wine, and servo data sets, the networks have about the same size for the same order, but the optimal value for the weight variance differs a lot for the network with the logistic activation function.


Jianping Wu and Zhi-Hua Zhou and Cheng-The Chen. Ensemble of GA-based Selective Neural Network Ensembles. National Laboratory for Novel Software Technology, Nanjing University.

GASEN's generalization error is significantly lower than that of the simple ensemble method, and e-GASEN attains still lower generalization errors than GASEN. On the Servo data set, GASEN is slightly inferior to the simple ensemble; the e-GASEN method's performance, however, does not differ significantly from that of the simple ensemble method. From the aforementioned statistics
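
GASEN-style selection, as far as the excerpt goes, evolves a weight over each ensemble member, keeps the members whose weight clears a threshold, and combines the survivors by simple averaging. A toy sketch follows, with a mutation-only evolutionary loop standing in for the paper's genetic algorithm; the 1/n threshold and all hyperparameters are assumptions made here.

    import numpy as np

    def gasen_select(preds_val, y_val, thresh=None, pop=30, gens=50,
                     sigma=0.1, rng=None):
        """Evolve member weights, keep members whose weight exceeds
        the threshold, and score by validation MSE of their average.

        preds_val : (n_members, n_samples) member predictions on a
                    validation set
        y_val     : (n_samples,) validation targets
        """
        if rng is None:
            rng = np.random.default_rng(0)
        n = preds_val.shape[0]
        if thresh is None:
            thresh = 1.0 / n

        def fitness(w):
            keep = w > thresh
            if not keep.any():
                return np.inf
            yhat = preds_val[keep].mean(axis=0)   # simple average of selected
            return np.mean((yhat - y_val) ** 2)   # validation MSE

        population = rng.random((pop, n))
        for _ in range(gens):
            scores = np.array([fitness(w) for w in population])
            elite = population[np.argsort(scores)[: pop // 2]]
            children = elite + sigma * rng.standard_normal(elite.shape)
            population = np.vstack([elite, np.clip(children, 0.0, 1.0)])
        best = min(population, key=fitness)
        return best > thresh   # boolean mask of selected members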


Dorian Suc and Ivan Bratko. Combining Learning Constraints and Numerical Regression. National ICT Australia, Sydney Laboratory at UNSW.

are AutoMpg, AutoPrice, Housing, MachineCpu and Servo. The other three data sets are from dynamic domains where QUIN has typically been applied so far [Suc, 2003; Suc and Bratko, 2002]. It should be noted that in these domains the primary objective was to explain the



