Indian Journal of Science and Technology
DOI: 10.17485/ijst/2015/v8i33/71203
Year: 2015, Volume: 8, Issue: 33, Pages: 1-4
Original Article
K. Fathima Bibi1* and M. Nazreen Banu2
1 Department of Computer Science, Bharathiar University, Coimbatore – 641046, Tamil Nadu, India; [email protected]
2 Department of MCA, MAM College of Engineering, Tiruchirappalli – 621105, Tamil Nadu, India; [email protected]
In data mining, feature subset selection is a preprocessing step for classification that reduces dimensionality, eliminates irrelevant data, increases accuracy, and improves comprehensibility. The next step in classification is to generate a large number of rules from the reduced feature set, from which high-quality rules are chosen to build an effective classifier. In this paper, Information Gain (IG) is used to rank the features. A Multi-Layer Perceptron (MLP) with back-propagation reduces the feature set to achieve higher classification accuracy, and an Artificial Neural Network (ANN) classifier is used for classification. Continuous-valued features are discretized by dividing the range of values into a limited number of sub-intervals. The Wine Recognition data set from the UCI machine learning repository is used for testing. All 13 original features are initially used in classification; these are then reduced to five features. Experimental results show an accuracy of 98.62% on the training data set and 96.06% on the validation data set. The accuracy difference between the 13-feature and 5-feature models is 5.54% on the training data and 2.00% on the validation data. We then build a Decision Tree and focus on discovering significant rules from the reduced data set that provide better classification.
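The ranking step described above can be sketched in code. The following is an illustrative example, not the authors' implementation: it discretizes continuous feature values into equal-width bins (one common discretization scheme; the paper does not specify its binning rule) and ranks features by Information Gain against the class labels. The toy data and feature names are hypothetical stand-ins for the Wine set.

```python
# Illustrative sketch: Information Gain ranking of continuous features
# after equal-width discretization. Toy data; not the paper's code.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def discretize(values, bins=3):
    """Equal-width binning: map each continuous value to a bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # guard against a constant feature
    return [min(int((v - lo) / width), bins - 1) for v in values]

def information_gain(feature_values, labels, bins=3):
    """IG = H(class) - sum over bins v of p(v) * H(class | feature in v)."""
    binned = discretize(feature_values, bins)
    n = len(labels)
    conditional = 0.0
    for b in set(binned):
        subset = [y for x, y in zip(binned, labels) if x == b]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

# Hypothetical two-feature data: feature A separates the two classes,
# feature B is spread uniformly across both and carries no information.
labels = [0, 0, 0, 1, 1, 1]
feat_a = [1.0, 1.2, 1.1, 9.0, 9.2, 9.1]   # informative
feat_b = [5.0, 9.0, 1.0, 5.1, 9.1, 1.1]   # uninformative

ranked = sorted(
    {"A": feat_a, "B": feat_b}.items(),
    key=lambda kv: information_gain(kv[1], labels),
    reverse=True,
)
print([name for name, _ in ranked])  # ['A', 'B'] -- A ranks first
```

In the full pipeline the abstract describes, the top-ranked features from a step like this would then feed the MLP/ANN classifier and the Decision Tree rule extraction.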
Keywords: Back-Propagation, Classification, Decision Tree, Feature Subset Selection, Multi-Layer Perceptron