Statlog (Image Segmentation) Data Set
Below are papers that cite this data set, with context shown.
Papers were automatically harvested and associated with this data set, in collaboration with Rexa.info.
Anthony K H Tung and Xin Xu and Beng Chin Ooi. CURLER: Finding and Visualizing Nonlinear Correlated Clusters. SIGMOD Conference. 2005.
of three helix clusters with different cluster existence spaces, the iris plant dataset and the image segmentation dataset from the UCI Repository of Machine Learning Databases and Domain Theories , and the Iyer time series gene expression data with 10 well-known linear clusters
Xiaoli Z. Fern and Carla Brodley. Cluster Ensembles for High Dimensional Clustering: An Empirical Study. Journal of Machine Learning Research. 2004.
(6 letters only) UCI ML archive mfeat Handwritten digits represented by Fourier coefficients (Blake and Merz, 1998) satimage StatLog Satellite image data set (training set) segmentation Image segmentation data In contrast, HBGF allows the similarity of instances and the similarity of clusters to be considered simultaneously in producing the final
Aristidis Likas and Nikos A. Vlassis and Jakob J. Verbeek. The global k-means clustering algorithm. Pattern Recognition, 36. 2003.
, the synthetic data set  and the image segmentation data set . In all data sets we conducted experiments for the clustering problems obtained by considering only feature vectors and ignoring class labels. The iris
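The global k-means algorithm the excerpt refers to builds a K-cluster solution incrementally: the 1-cluster solution is exact (the data mean), and each subsequent k-clustering is found by running conventional k-means once per data point, with that point tried as the candidate new center. A minimal NumPy sketch of that idea (function names are ours, not from the paper):

```python
import numpy as np

def assign(X, centers):
    """Nearest-center labels and total squared clustering error."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    return labels, d[np.arange(len(X)), labels].sum()

def kmeans(X, centers, iters=100):
    """Standard k-means refinement from the given initial centers."""
    for _ in range(iters):
        labels, _ = assign(X, centers)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, assign(X, centers)[1]

def global_kmeans(X, K):
    """Grow from 1 to K clusters, trying every point as the new center."""
    centers = X.mean(axis=0, keepdims=True)   # the k = 1 solution is exact
    for k in range(2, K + 1):
        best_c, best_err = None, np.inf
        for x in X:                            # each point is a candidate center
            c, err = kmeans(X, np.vstack([centers, x]))
            if err < best_err:
                best_c, best_err = c, err
        centers = best_c
    return centers
```

This naive version runs n full k-means refinements per added cluster; the paper also proposes a faster variant that bounds the error reduction of each candidate instead of refining it.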
Manoranjan Dash and Huan Liu and Peter Scheuermann and Kian-Lee Tan. Fast hierarchical clustering and its validation. Data Knowl. Eng, 44. 2003.
the estimated K 0 is close to K; e.g. for C5 type with M = 20, N = 10k, and K = 100, the estimated K 0 is 110. Results for high-dimensional benchmark datasets (e.g. Pendigit, image segmentation and character) also show similar close estimates. The reason is that with increasing dimensionality the difference between intra-cluster and inter-cluster distances
Thomas T. Osugi. Exploration-Based Active Machine Learning. M.S. thesis, The Graduate College, University of Nebraska.
Appendix table-of-contents excerpt: ...Characteristics (47); A.3 Image Segmentation Dataset Characteristics (48); A.4 Corel Dataset Characteristics (48); A.5 XOR Checkerboard d = 2, n = 2 Dataset Characteristics ...
Nikos A. Vlassis and Aristidis Likas. A greedy EM algorithm for Gaussian mixture. Intelligent Autonomous Systems, IAS.
this becomes clearer as the degree of separation c of the components increases. In the second part of the experiments we applied the greedy EM and the regular EM algorithm on an image segmentation data set available from the UCI repository . This data set contains 210 training patterns and 2100 test patterns of 19 features. Although this is a supervised data set, in the conducted experiments we
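The excerpt applies greedy EM for Gaussian mixtures to the segmentation data with class labels ignored. The paper's greedy variant adds mixture components one at a time; the sketch below shows only the EM core it builds on, for a spherical-Gaussian mixture, with a deterministic farthest-point initialisation of our own choosing (all names are ours, not the paper's):

```python
import numpy as np

def em_spherical_gmm(X, k, iters=100):
    """Plain EM for a mixture of k spherical Gaussians (illustrative only)."""
    n, d = X.shape
    # Farthest-point initialisation of the means (our choice, for determinism).
    mu = np.empty((k, d))
    mu[0] = X[0]
    for j in range(1, k):
        dists = ((X[:, None, :] - mu[None, :j, :]) ** 2).sum(axis=2).min(axis=1)
        mu[j] = X[dists.argmax()]
    var = np.full(k, X.var())          # one variance per component
    pi = np.full(k, 1.0 / k)           # mixing weights
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = p(component j | x_i)
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)   # stabilise the softmax
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = (r * sq).sum(axis=0) / (nk * d) + 1e-9
    return pi, mu, var
```

The greedy scheme of the paper would wrap this: fit with k components, then search for the best place to insert component k + 1 and refine again.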
Amund Tveit. Empirical Comparison of Accuracy and Performance for the MIPSVM classifier with Existing Classifiers. Division of Intelligent Systems Department of Computer and Information Science, Norwegian University of Science and Technology.
As we can see from the results in figure 1, MIPSVM performs comparably well when it comes to classification accuracy for the Waveform and Image Segment datasets. For the Letter Recognition dataset it performs considerably worse than the other classifiers. This is likely caused by MIPSVM not having any one-against-the-rest balancing mechanisms
Je Scott and Mahesan Niranjan and Richard W. Prager. Realisable Classifiers: Improving Operating Performance on Variable Cost Problems. Cambridge University Department of Engineering.
for improving classification systems in problem domains within which classification costs may not be known a priori. Empirical results are presented for artificial data, and for two real-world data sets: an image segmentation task and the diagnosis of abnormal thyroid condition. 1 Introduction A large fraction of decision support systems, particularly those used in medical diagnostics (e.g.
C. Titus Brown and Harry W. Bullen and Sean P. Kelly and Robert K. Xiao and Steven G. Satterfield and John G. Hagedorn and Judith E. Devaney. Visualization and Data Mining in a 3D Immersive Environment: Summer Project 2003.
be reduced to about three or four. Figure 4.3: The graph of the Chi-Squared analysis of the housing data set. 4.4 Image Segmentation. This data set was analysed by Christian Brown. Overview: The Image Segmentation (Seg) data set consisted of data relating numerous analyses of the colors in subdivided
Adil M. Bagirov and Alex Rubinov and A. N. Soukhojak and John Yearwood. Unsupervised and supervised data classification via nonsmooth and global optimization. School of Information Technology and Mathematical Sciences, The University of Ballarat.
some databases with known classes. We used the diabetes, liver disorder, heart disease, breast cancer, vehicles, synthetic, penbased recognition of handwritten digits (PBRHD) and image segmentation datasets in numerical experiments. Descriptions of these datasets can be found in Appendix. First, we normalized all features. This was done by a nonsingular matrix so that mean values of all features were
K. A. J Doherty and Rolf Adams and Neil Davey. Unsupervised Learning with Normalised Data and Non-Euclidean Norms. University of Hertfordshire.
Segmentation Training Data:

K   L0.1  L0.5  L1    L2    L4    L#
3   518   539   494   450   446   437
5   818   874   772   692   720   678
9   1345  1424  1249  1184  1167  1066

Using the UCI Ionosphere, WDBC and Image Segmentation data sets we performed a K-NN search. For each member of the data set of class c, where c ∈ C (q.v. Table 2), the K-NNs are identified and a count maintained of those neighbours whose class was also c. Table
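The evaluation the excerpt describes — for each point, find its K nearest neighbours under an L_r norm (including fractional r) and count how many share the point's class — can be sketched as a leave-one-out count (function names are ours):

```python
import numpy as np

def lr_dist(A, b, r):
    """Minkowski L_r distance from each row of A to b.
    Fractional r < 1 gives the non-metric 'fractional norms' of the paper."""
    return (np.abs(A - b) ** r).sum(axis=-1) ** (1.0 / r)

def same_class_neighbour_count(X, y, K, r):
    """Total number of same-class neighbours over all leave-one-out
    K-NN queries, under the L_r distance."""
    total = 0
    for i in range(len(X)):
        d = lr_dist(X, X[i], r)
        d[i] = np.inf                       # exclude the query point itself
        nn = np.argpartition(d, K)[:K]      # indices of the K nearest points
        total += int((y[nn] == y[i]).sum())
    return total
```

A higher count means neighbourhoods under that norm are more class-pure, which is how the table above compares the different L_r choices.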
Adil M. Bagirov and John Yearwood. A new nonsmooth optimization algorithm for clustering. Centre for Informatics and Applied Optimization, School of Information Technology and Mathematical Sciences, University of Ballarat.
some databases with known classes. We used the diabetes, liver disorder, heart disease, breast cancer, vehicles, synthetic, pen-based recognition of handwritten digits (PBRHD) and image segmentation data sets in numerical experiments. Descriptions of these data sets can be found in . First, we normalized all features. This was done by a nonsingular matrix so that mean values of all features were 1.
K. A. J Doherty and Rolf Adams and Neil Davey. Non-Euclidean Norms and Data Normalisation. Department of Computer Science, University of Hertfordshire, College Lane.
We repeated the empirical test of NN search using both fractional and higher-order L r norms in  with the Ionosphere, Wisconsin Diagnostic Breast Cancer (WDBC), and Image Segmentation labelled data sets from the UCI Machine Learning Repository . We report only a single representative result set. Table I shows the results of the nearest neighbour search on the raw Image Segmentation data set,
Michael Lindenbaum and Shaul Markovitch and Dmitry Rusakov. Selective Sampling Using Random Field Modelling.
Pima Indians Diabetes dataset, Ionosphere dataset and Image Segmentation dataset, one synthetic dataset: Letters dataset and three artificial problems: Two-Spirals problem, Two-Gaussians problem and Multi-Gaussian problem. The
James Tin-Yau Kwok. Moderating the Outputs of Support Vector Machine Classifiers. Department of Computer Science, Hong Kong Baptist University, Hong Kong.
versa) and G goes to infinity. For the estimate based on the unmoderated output, the value of G obtained is inferior to that based on the moderated output. 5.2 Image Segmentation Problem. The second data set is the image segmentation data from the UCI machine learning repository. Each pattern has 19 continuous attributes and corresponds to a 3×3 region of an outdoor image. The problem is to
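The paper moderates SVM outputs with a Bayesian treatment. As a rough illustration of the same goal — turning raw SVM decision values into class probabilities — here is a simple Platt-style sigmoid calibration fitted by gradient descent. This is not the paper's moderation method, and all names below are ours:

```python
import numpy as np

def fit_platt(scores, labels, iters=2000, lr=0.05):
    """Fit p(y=1|s) = 1 / (1 + exp(A*s + B)) by gradient descent on the
    cross-entropy loss (Platt-style calibration, shown as a baseline)."""
    A, B = 0.0, 0.0
    t = labels.astype(float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        g = t - p                        # gradient of loss w.r.t. (A*s + B)
        A -= lr * (g * scores).mean()
        B -= lr * g.mean()
    return A, B

def calibrated_prob(s, A, B):
    """Map a raw SVM decision value s to a calibrated probability."""
    return 1.0 / (1.0 + np.exp(A * s + B))
```

After fitting, large positive decision values map to probabilities near 1 and large negative values to probabilities near 0, instead of the unbounded raw margins.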