Lenses Data Set
Below are papers that cite this data set, with context shown.
Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.
Bob Ricks and Dan Ventura. Training a Quantum Neural Network. NIPS. 2003.
an epoch refers to finding and fixing the weight of a single node. We also tried the randomized search algorithm on a few real-world machine learning problems: the lenses, Hayes-Roth, and iris datasets. The lenses data set tries to predict whether people will need soft contact lenses, hard contact lenses, or no contacts. The iris dataset details features of three different
Jeremy Kubica and Andrew Moore. Probabilistic Noise Identification and Data Cleaning. ICDM. 2003.
on the level of cells, where a single cell may be generated by a different model than cells of the corresponding attribute for the other records from this class. 5 Evaluation 5.1 Naturally Corrupted Data Sets As an initial test, we ran LENS on several real world data sets. These data sets contained "natural" corruptions that were not explicitly generated from the assumed models. 5.1.1 Leaf and Rock Data
Ke Wang and Shiyu Zhou and Ada Wai-Chee Fu and Jeffrey Xu Yu. Mining Changes of Classification by Correspondence Tracing. SDM. 2003.
from old ones. Our discussion focuses on forward change mining, but it is equally applicable to backward change mining with the roles of old and new rules exchanged. Example 3.1. We use the Lenses data set from the UCI repository to illustrate our approach. There are four attributes, three classes, and 18 examples: Attributes: A1: Age: 1, 2, 3; A2: Spectacle Prescription: 1, 2; A3:
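The attribute coding in the snippet above can be illustrated with a hypothetical decoder. Note that the attribute names and value labels beyond A1 and A2 are taken from the UCI documentation for this data set, not from the excerpt itself:

```python
# Hypothetical decoder for one coded Lenses record (a1, a2, a3, a4, class).
# Value labels follow the UCI Lenses data set documentation.
ATTRIBUTES = {
    "age": {1: "young", 2: "pre-presbyopic", 3: "presbyopic"},
    "spectacle_prescription": {1: "myope", 2: "hypermetrope"},
    "astigmatic": {1: "no", 2: "yes"},
    "tear_production_rate": {1: "reduced", 2: "normal"},
}
CLASSES = {1: "hard contact lenses", 2: "soft contact lenses", 3: "no contact lenses"}

def decode(record):
    """Map a coded record (a1, a2, a3, a4, cls) to human-readable values."""
    *values, cls = record
    decoded = {name: ATTRIBUTES[name][v] for name, v in zip(ATTRIBUTES, values)}
    decoded["class"] = CLASSES[cls]
    return decoded
```

For example, `decode((1, 2, 2, 2, 1))` reads as a young, hypermetrope, astigmatic patient with normal tear production, classified as needing hard contact lenses.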
Jim Prentzas and Ioannis Hatzilygeroudis and Athanasios K. Tsakalidis. Updating a Hybrid Rule Base with New Empirical Source Knowledge. ICTAI. 2002.
taken from the UCI Repository of Machine Learning and Domain Theories. More specifically, we used the lenses dataset containing 24 examples/patterns of 9 component values and the tic-tac-toe dataset containing 958 examples/patterns of 27 component values. Additionally, we used three datasets of ours produced from a
Pedro Domingos. Knowledge Discovery Via Multiple Models. Intell. Data Anal, 2. 1998.
It may thus be possible to substantially optimize CMM's complexity without seriously affecting accuracy by choosing 7 Table 2: Empirical results: average accuracies and their standard deviations. Dataset CMM Bagging Single Lenses 75.0±6.8 75.0±6.8 62.5±7.1 Lung cancer 40.0±7.5 36.7±7.2 31.7±7.0 Soybean (small) 97.0±1.6 97.0±1.6 98.0±1.4 Labor
J. Kent Martin and Daniel S. Hirschberg. Small Sample Statistics for Classification Error Rates I: Error Rate Measurements. Department of Information and Computer Science University of California, Irvine. 1996.
attributes. To explore the behavior of resampling estimators for non-numeric attributes in other inference environments, a series of experiments was conducted using the contact lens prescription data set. In this artificial problem, patients are classified into 3 categories (hard, soft, none) based on the values of 4 attributes (1 tertiary and 3 binary). The 24 instances given cover all cases
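The claim that the 24 instances cover all cases follows from the attribute arity: one tertiary and three binary attributes give 3 × 2 × 2 × 2 = 24 possible combinations. A minimal sketch enumerating that space:

```python
from itertools import product

# One tertiary attribute and three binary attributes, as in the snippet above.
levels = [3, 2, 2, 2]

# Enumerate every possible attribute-value combination (values coded 1..n).
instances = list(product(*(range(1, n + 1) for n in levels)))

print(len(instances))  # 24 -- the data set spans the full attribute space
```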
Christophe Giraud and Tony Martinez and Christophe G. Giraud-Carrier. University of Bristol Department of Computer Science ILA: Combining Inductive Learning with Prior Knowledge and Reasoning. 1995.
animal has four legs, then animal belongs to class 1 2. If animal has feathers, then animal belongs to class 2 3. If animal lays eggs, is aquatic, and has fins, then animal belongs to class 4. lenses dataset: 1. If patient has low tear production rate, then patient is not fit for contact lenses. voting-84 dataset: 1. If representative voted 'no' on the 'physician-fee-freeze' issue, then rep. is a
Geoffrey I. Webb. OPUS: An Efficient Admissible Algorithm for Unordered Search. J. Artif. Intell. Res. (JAIR), 3. 1995.
the 24 CPU hour limit. may turn out to be a better choice than the other, leading to the exploration of fewer nodes. To test the plausibility of this explanation, OPUS o was run again on the Lenses data set with Step 8 altered to ensure that where two siblings have equal optimistic value they are ordered in the same order as was employed with no optimistic reordering. This resulted in the exploration
Christophe G. Giraud-Carrier and Tony Martinez. AN INCREMENTAL LEARNING MODEL FOR COMMONSENSE REASONING. Department of Computer Science Brigham Young University.
1. If animal has four legs, then animal is a mammal 2. If animal has feathers, then animal is a bird 3. If animal lays eggs, is aquatic, and has fins, then animal is a fish. lenses dataset: 1. If patient has low tear production rate, then patient is not fit for contact lenses. voting and voting3 datasets: 1. If representative voted 'no' on the 'physician-fee-freeze' issue, then
Anthony D. Griffiths and Derek Bridge. A Yardstick for the Evaluation of Case-Based Classifiers. Department of Computer Science, University of York.
giving 24 possible customer descriptions. These descriptions are classified into three classes according to the suitability of different types of contact lens. The documentation for the data set indicates that a correct description of the target function requires 9 production rules. The distribution of the classification values is highly skewed, giving a relative frequency for the majority
Mehmet Dalkilic and Arijit Sengupta. A Logic-theoretic classifier called Circle. School of Informatics Center for Genomics and BioInformatics Indiana University.
from the UC Irvine Machine Learning repository. Full deterministic Circle was tested on the simple data sets like contact lenses and weather, as well as large data sets like monks, mushroom, and Zoo. As an example of the performance improvement, while full Circle took over an hour to terminate using the