Congressional Voting Records Data Set
Below are papers that cite this data set, with context shown.
Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.
Aristides Gionis and Heikki Mannila and Panayiotis Tsaparas. Clustering Aggregation. ICDE. 2005.
completely parameter-free! Neither a threshold nor the number of clusters needs to be specified. The numbers of clusters discovered by our algorithms seem to be very reasonable choices: for the Votes dataset most people vote according to the official position of their political parties, so having two clusters is natural; for the Mushrooms dataset, notice that both ROCK and LIMBO achieve much better
Julie Greensmith. New Frontiers For An Artificial Immune System. Digital Media Systems Laboratory HP Laboratories Bristol. 2003.
have been created from the #-informative words, and have been labelled, the classifier is all set and ready to go. 5 times 10-fold cross-validation was performed in a similar manner to the voting dataset and the results collated. Standard statistical techniques, including the Wilcoxon-Mann-Whitney ranking and Student's t-test, are used in order to analyse the results. The Wilcoxon ranking was used
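The evaluation protocol in this excerpt (5 times 10-fold cross-validation, compared with a Wilcoxon/Mann-Whitney ranking) can be sketched in a few lines. This is a generic illustration, not code from the paper; the only details taken from the excerpt are the fold counts and the dataset size of 435.

```python
# Generic sketch: 5 repetitions of 10-fold cross-validation index splits,
# plus a Mann-Whitney U statistic for comparing two classifiers'
# per-fold accuracies. Not the authors' code.
import random

def kfold_indices(n, k, rng):
    # Shuffle 0..n-1 and deal indices round-robin into k disjoint folds.
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mann_whitney_u(xs, ys):
    # U = number of (x, y) pairs with x > y, counting ties as 1/2.
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

rng = random.Random(0)
n = 435                       # instances in the voting dataset
folds = []
for rep in range(5):          # 5 x 10-fold cross-validation
    folds.extend(kfold_indices(n, 10, rng))

assert len(folds) == 50                       # 5 reps x 10 folds
assert sum(len(f) for f in folds) == 5 * n    # every instance used once per rep
assert mann_whitney_u([3, 4], [1, 2]) == 4.0  # all pairs favour xs
```

The U statistic would then be compared against critical values (or a normal approximation) to decide whether one classifier's fold accuracies rank significantly higher than the other's.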
Daniel J. Lizotte and Omid Madani and Russell Greiner. Budgeted Learning of Naive-Bayes Classifiers. UAI. 2003.
problem with nine features that can take on between two and five values. The relative performances of the policies are closer to each other, but their behaviour is similar to Figure 4(b). The votes dataset (Figure 4(d)) is a binary class problem (democrat vs. republican), with 16 binary features, 435 instances, and a positive class probability of 0.61. In the votes dataset, there is a high proportion
Jonathan Eckstein and Peter L. Hammer and Ying Liu and Mikhail Nediak and Bruno Simeone. The Maximum Box Problem and its Application to Data Analysis. RUTCOR Rutgers Center for Operations Research Rutgers University. 2002.
1. Table 1 also indicates the number of observations for which some data values are missing. In our experiments, we removed the observations with missing values, with the exception of the voting data set, where almost half of the observations contained missing values. In this data set, there are sixteen attributes, and all of them are binary. In this case, we have substituted the missing binary
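The excerpt says that for the voting data, where almost half the observations contain missing values, missing binary values were substituted rather than the rows dropped. The substitution rule itself is not quoted, so the majority-value imputation below is an assumed stand-in for illustration, not the authors' method:

```python
# Assumed illustration (not from the paper): fill each missing binary
# value with the attribute's majority value. None marks a missing vote.
def impute_majority(column):
    observed = [v for v in column if v is not None]
    # Majority value among observed entries; ties resolve to 1.
    majority = 1 if sum(observed) * 2 >= len(observed) else 0
    return [majority if v is None else v for v in column]

votes = [1, None, 1, 0, 1, None]   # toy column of yea(1)/nay(0) votes
assert impute_majority(votes) == [1, 1, 1, 0, 1, 1]
```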
Daniel Barbar and Yi Li and Julia Couto. COOLCAT: an entropy-based algorithm for categorical clustering. CIKM. 2002.
[Flattened table residue omitted: expected entropy and running-time figures for COOLCAT and ROCK.] Figure 10: Results for COOLCAT and ROCK in the Congressional Voting data set. [Flattened table residue omitted: CU and entropy figures for COOLCAT (m at 0%, 10%, 20%, 40%) versus ROCK.]
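COOLCAT, as described in this excerpt, ranks clusterings by expected entropy. As a generic illustration (not the authors' implementation), the criterion for categorical rows is the size-weighted sum of per-cluster attribute entropies:

```python
# Generic sketch of an expected-entropy criterion for categorical
# clustering, in the spirit of COOLCAT; not the authors' code.
from math import log2
from collections import Counter

def cluster_entropy(rows):
    # Sum of Shannon entropies of each attribute within one cluster.
    total = 0.0
    for col in zip(*rows):
        counts = Counter(col)
        n = len(col)
        total -= sum(c / n * log2(c / n) for c in counts.values())
    return total

def expected_entropy(clusters):
    # Size-weighted average of per-cluster entropies; lower is better.
    n = sum(len(c) for c in clusters)
    return sum(len(c) / n * cluster_entropy(c) for c in clusters)

pure = [[("y", "y"), ("y", "y")], [("n", "n"), ("n", "n")]]
mixed = [[("y", "y"), ("n", "n")], [("y", "n"), ("n", "y")]]
assert expected_entropy(pure) == 0.0   # homogeneous clusters
assert expected_entropy(mixed) == 2.0  # every attribute fully mixed
```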
Federico Divina and Elena Marchiori. Evolutionary Concept Learning. GECCO. 2002.
respectively. The three datasets are public-domain datasets. The vote dataset contains the votes of each of the U.S. House of Representatives Congressmen on sixteen key votes. The problem is learning a concept for distinguishing
Robert M French and Nick Chater. Using Noise to Compute Error Surfaces in Connectionist Networks: A Novel Means of Reducing Catastrophic Forgetting. Neural Computation. 2002.
In order to further test the HPBP algorithm on a sequential learning task drawn from a real-world database, we selected the 1984 Congressional Voting Records database from the UCI repository (Murphy & Aha, 1992).
Gary M. Weiss and Haym Hirsh. A Quantitative Study of Small Disjuncts: Experiments and Results. Department of Computer Science Rutgers University. 2000.
validation and the results are therefore based on averages of the test set calculated over 10 runs. Unless specified otherwise, all results are based on C4.5 without pruning. 3 AN EXAMPLE: THE VOTE DATASET In order to illustrate the problem with small disjuncts and introduce a way of measuring this problem, we examine the concept learned by C4.5 from the Vote dataset. Figure 1 shows how the correctly
Chun-Nan Hsu and Hilmar Schuschel and Ya-Ting Yang. The ANNIGMA-Wrapper Approach to Neural Nets Feature Selection for Knowledge Discovery and Data Mining. Institute of Information Science. 1999.
reported the configuration of the network in detail. Among their configuration, the learning rate 2.0 and the range of initial weights [-0.3, +0.3] are adopted in our experiments. Vote This dataset consists of the voting records of 435 congressmen on 16 issues in the 1984 congress, 2nd session. The votes are classified into ``yea'', ``nay'', and ``unknown''. The classification problem is to
Huan Liu and Rudy Setiono. Incremental Feature Selection. Appl. Intell, 9. 1998.
critical evaluation of intrinsic dimensionality algorithms. In E.S. Gelsema and L.N. Kanal, editors, Pattern Recognition in Practice, pages 415-425. Morgan Kaufmann Publishers, Inc., 1980. [Figure residue omitted: CPU time versus percent of training samples for the Vote and Mushroom datasets.]
Blai Bonet and Hector Geffner. Learning Sorting and Decision Trees with POMDPs. ICML. 1998.
takes a few minutes on average and leaves a few thousand entries in the hash table. For the larger Votes dataset the run takes 24 minutes on average and leaves around 16000 entries in the hash table. During testing, whenever a new belief state b o a was generated that was not in the hash table, b o a was
Igor Kononenko and Edvard Simec and Marko Robnik-Sikonja. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell, 7. 1997.
Assistant-R and LFC achieve significantly better results (99.95% confidence level). This result confirms that RELIEFF estimates the quality of attributes better than the information gain. On the VOTE data set the naive Bayesian classifier is the worst, while both versions of Assistant are comparable to the rule-based classifier by Smyth et al. The most interesting results appear in the MESH
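For reference, the information gain baseline that RELIEFF is compared against here is the standard class-entropy reduction of a split. A minimal generic sketch (not code from the paper):

```python
# Generic information-gain computation for one categorical attribute:
# class entropy minus the size-weighted entropy of each value's subset.
from math import log2

def entropy(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs)

def info_gain(values, labels):
    n = len(labels)
    gain = entropy(labels)
    for v in set(values):
        subset = [c for x, c in zip(values, labels) if x == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

labels = ["dem", "dem", "rep", "rep"]
# An attribute that perfectly splits the class has gain equal to the
# class entropy; an irrelevant one has gain 0.
assert info_gain(["y", "y", "n", "n"], labels) == 1.0
assert info_gain(["y", "n", "y", "n"], labels) == 0.0
```

RELIEFF's advantage, per the excerpt, is that unlike this myopic per-attribute measure it estimates attribute quality in the context of other attributes.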
Eui-Hong Han and George Karypis and Vipin Kumar and Bamshad Mobasher. Clustering Based On Association Rule Hypergraphs. DMKD. 1997.
the cluster which has the highest score with respect to that transaction. We performed clustering of transactions on 1984 United States Congressional Voting Records Database provided by [MM96]. The data set includes 435 transactions each corresponding to one Congressman's votes on 16 key issues. We removed class values from each transaction, and we followed the steps specified in Section 2.1 to
Erin J. Bredensteiner and Kristin P. Bennett. Feature Minimization within Decision Trees. National Science Foundation. 1996.
is linearly separable. 1984 United States Congressional Voting Records Database This data set includes votes for each of the 435 U.S. House of Representatives Congressmen. There are 267 democrats and 168 republicans. The chosen attributes represent 16 key votes. Possible values for the
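A quick arithmetic check of the figures quoted here: 267 democrats and 168 republicans give the 435 instances and the roughly 0.61 majority-class probability cited by several other papers on this page.

```python
# Class balance of the Congressional Voting Records data set,
# using only the counts quoted in the excerpt above.
democrats, republicans = 267, 168
total = democrats + republicans
p_majority = democrats / total

assert total == 435
assert round(p_majority, 2) == 0.61
```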
Ron Kohavi and George H. John and Richard Long and David Manley and Karl Pfleger. MLC++: A Machine Learning Library in C. ICTAI. 1994.
[Figure residue omitted: leaf labels (n/y/u) from the induced decision trees.] Figure 4: The vote dataset (top), the concept induced by ID3 (center), and the concept induced by HOODG (bottom). [Figure residue omitted: Body Shape / Head Shape / Is Smiling decision nodes.]
Chotirat Ann and Dimitrios Gunopulos. Scaling up the Naive Bayesian Classifier: Using Decision Trees for Feature Selection. Computer Science Department University of California.
9 attributes, 2 classes. Attributes selected by SBC = 4. [Figure residue omitted: accuracy (%) of NBC, SBC, and C4.5 versus percentage of training data.] Figure 11. Congressional Voting dataset: 435 instances, 16 attributes, 2 classes. Attributes selected by SBC = 3. To see a clearer picture of the SBC performance, we did an experiment on the same set of data using the Augmented Bayesian
Rudy Setiono and Huan Liu. Neural-Network Feature Selector. Department of Information Systems and Computer Science National University of Singapore.
1990 . For our experiment, 315 samples were randomly selected for training, 35 samples were selected for cross-validation, and 349 for testing. 2. United States Congressional voting records Dataset. The dataset consists of the voting records of 435 congressmen on 16 major issues in the 98th Congress. The votes are classified into one of the three different types of votes: yea, nay, and
Igor Kononenko and Edvard Simec. Induction of decision trees using RELIEFF. University of Ljubljana, Faculty of electrical engineering & computer science.
(table 6) indicates that the attributes are irrelevant to the class. On the VOTE data set the naive Bayesian classifier is the worst, while both versions of Assistant are comparable to the rule-based classifier by Smyth et al. (1990). The most interesting results appear in the MESH
Daniel J. Lizotte. Library Release Form Name of Author. Budgeted Learning of Naive Bayes Classifiers.
75 times, while a non-discriminative feature such as feature 18 is bought an average of only 2 times. For some budgets, the 0/1 error of SFL is nearly half that generated by Round-Robin. The votes dataset (Figures 3.7(a) and 3.7(b)) is a binary class problem (democrat vs. republican), with 16 binary features, 435 instances, and a positive class probability of 0.61. In the votes dataset, there is a
Daniel J. Lizotte and Omid Madani and Russell Greiner. Budgeted Learning, Part II: The Naïve-Bayes Case. Department of Computing Science University of Alberta.
problem with nine features that can take on between two and five values. The relative performances of the policies are closer to each other, but their behaviour is similar to Figure 4(a). The votes dataset (Figure 4(c)) is a binary class problem (whether or not republican), with 16 binary features, 435 instances, and a positive class probability of 0.61. In the votes dataset, there is a high