Statlog (Australian Credit Approval) Data Set

Below are papers that cite this data set, with context shown. Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.

Return to Statlog (Australian Credit Approval) data set page.


Jeroen Eggermont and Joost N. Kok and Walter A. Kosters. Genetic Programming for data classification: partitioning the search space. SAC. 2004.

Data sets used in the experiments:

Data set                     Records  Attributes  Classes
Australian credit (statlog)      690          14        2
German credit (statlog)         1000          23        2
Pima Indians diabetes            768           8        2
Heart disease (statlog)          270          13        2
Ionosphere                       351          34        2
Iris                             150           4        3


Bart Hamers and J. A. K. Suykens. Coupled Transductive Ensemble Learning of Kernel Models. Bart De Moor. 2003.

Table 2: Misclassification rates on a test set (Tic-Tac-Toe (TTT), Australian Credit Card Data Set (ACR) and the Adult Data Set (ADULT)). The number of models is indicated by the second number in Table 1; for example, TTT11 is an ensemble model based on 11 individual models for the TTT prediction. We


Xiaoming Huo. FBP: A Frontier-Based Tree-Pruning Algorithm. Seoung Bum Kim. 2002.

the mean difference of the CV error between CCP and FBP is (0.0770, 0.2196). As mentioned earlier, we treat this as a "sanity check".

Table 4: Comparison of the CV Error Rates Between CCP and FBP

Data Set                      CCP     FBP     Winner
Australian Credit Approval    14.13   14.01   FBP
Cleveland Heart Disease       21.15   20.89   FBP
Congressional Voting Records   4.16    4.12   FBP
Wisconsin Breast Cancer        4.56    4.47   FBP
Iris Plants                    5.20


Endre Boros and Peter Hammer and Toshihide Ibaraki and Alexander Kogan and Eddy Mayoraz and Ilya B. Muchnik. An Implementation of Logical Analysis of Data. IEEE Trans. Knowl. Data Eng., 12. 2000.

from the Irvine repository which were used in our experiments. Australian Credit Card. The dataset, submitted to the repository by J. Quinlan, consists of 690 records of MasterCard applicants, 307 of which are classified as positive and 383 as negative. While 37 records have some missing data,


Mark A. Hall. Correlation-based Feature Selection for Machine Learning. Doctor of Philosophy thesis, Department of Computer Science, The University of Waikato, Hamilton, New Zealand. 1999.

has the single most predictive attribute (physician-fee-freeze) removed. Australian credit screening (cr). This dataset contains 690 instances from an Australian credit company. The task is to distinguish credit-worthy from non-credit-worthy customers. There are 15 attributes whose names and values have been


Rudy Setiono and Huan Liu. NeuroLinear: From neural networks to oblique decision rules. Neurocomputing, 17. 1997.

Each neural network was given a set of initial weights randomly generated in the interval [-1, 1]. For all networks, the following

Table 6: Accuracy rates (%) of C4.5rules and NeuroLinear

Dataset                       C4.5rules      NeuroLinear    P-value
Australian Credit Approval    84.22 (2.93)   83.64 (5.74)   0.60
Boston Housing Data           83.81 (5.90)   80.60 (9.12)   0.28
Cleveland Heart Disease       75.45 (7.17)   78.15 (6.86)   0.24


Hussein A. Abbass. Pareto Neuro-Evolution: Constructing Ensemble of Neural Networks Using Multi-objective Optimization. Artificial Life and Adaptive Robotics (A.L.A.R.) Lab, School of Information Technology and Electrical Engineering, Australian Defence Force Academy.

tested MPANN on two benchmark problems: the Australian credit card assessment problem and the diabetes problem, available by anonymous ftp from ics.uci.edu [6]. The Australian credit card assessment dataset contains 690 patterns with 14 attributes; 6 numeric and 8 discrete (with 2 to 14 possible values). The predicted class is binary - 1 for awarding the credit and 0 for not. To be consistent with the
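As a quick illustration of the dataset layout described in this excerpt, the hedged sketch below loads the Statlog Australian credit data and checks its shape and class balance. It is not code from any cited paper; the file name, URL and column layout (14 whitespace-separated attributes followed by a 0/1 class label) are assumptions about the usual UCI distribution of australian.dat.

import pandas as pd

# Assumed location of the UCI file; adjust if the repository layout differs.
URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "statlog/australian/australian.dat")

df = pd.read_csv(URL, sep=r"\s+", header=None)
X = df.iloc[:, :14]      # 14 anonymized attributes (6 numeric, 8 discrete)
y = df.iloc[:, 14]       # class label: 1 = credit awarded, 0 = not (per the excerpt above)

print(X.shape)           # expected (690, 14)
print(y.value_counts())  # class balance (about 307 positive vs. 383 negative per Boros et al.)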


Krzysztof Grabczewski and Włodzisław Duch. THE SEPARABILITY OF SPLIT VALUE CRITERION. Department of Computer Methods, Nicolaus Copernicus University.

analyzed in the Statlog project [6]. Results of the C4.5 decision tree are already significantly worse. 5.4 Statlog Australian credit data This dataset contains 690 cases classified in 2 classes (+ and -). Data vectors are described by 14 attributes (6 continuous and 8 discrete). Table 4 presents a comparison of 10-fold cross-validation results for


Bart Baesens and Stijn Viaene and Tony Van Gestel and J. A. K. Suykens and Guido Dedene and Bart De Moor and Jan Vanthienen and Katholieke Universiteit Leuven. An Empirical Assessment of Kernel Type Performance for Least Squares Support Vector Machine Classifiers. Dept. Applied Economic Sciences.

using different types of kernel functions. The data sets considered are Australian Credit (acr), Bupa Liver Disorders (bld), German Credit (gcr), Heart Disease (hea), Johns Hopkins Ionosphere (ion), Pima Indians Diabetes (pid), Sonar (snr), Tic-Tac-Toe


Adil M. Bagirov and Alex Rubinov and A. N. Soukhojak and John Yearwood. Unsupervised and supervised data classification via nonsmooth and global optimization. School of Information Technology and Mathematical Sciences, The University of Ballarat.

local optimization (Discrete Gradient method, see Section 4). For testing the efficiency of the combination of k-means and the Discrete Gradient method, we use four well-known medium-size test datasets: the Australian credit dataset, Diabetes dataset, Liver disorder dataset and Vehicle dataset. The description of these datasets can be found in the Appendix. We studied these datasets using different


Włodzisław Duch and Karol Grudziński and Grzegorz Stawski. SYMBOLIC FEATURES IN NEURAL NETWORKS. Department of Computer Methods, Nicolaus Copernicus University.

significantly better and they show a similar trend, comparable to the other two classifiers, therefore they are not reported here. All data was taken from the UCI repository [8]. The Australian Credit dataset has 690 cases, each with 6 continuous and 8 symbolic values. There are 2 classes (majority rate is 55.5%), no missing values. 10-fold cross-validation tests were run, therefore the variance could be
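Several of the excerpts above report 10-fold cross-validation results on this dataset. The sketch below shows what that protocol looks like in outline, using a generic scikit-learn classifier as a stand-in for the papers' models; it is an illustrative assumption, not the authors' code, and it reuses the assumed UCI file location from the earlier sketch.

import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Load the data as before (assumed UCI layout: 14 attributes followed by a 0/1 label).
URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "statlog/australian/australian.dat")
df = pd.read_csv(URL, sep=r"\s+", header=None)
X, y = df.iloc[:, :14], df.iloc[:, 14]

# Stratified 10-fold CV keeps the 55.5% / 44.5% class ratio in every fold.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)

# Mean accuracy and its spread across folds -- the variance the excerpt refers to.
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")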


Return to Statlog (Australian Credit Approval) data set page.
