Indian Journal of Science and Technology
DOI: 10.17485/ijst/2014/v7sp7.5
Year: 2014, Volume: 7, Issue: Supplementary 7, Pages: 61–65
Original Article
M. Sudha1* and A. Kumaravel2
1 Department of Mathematics, Amet University, Kanathur, Chennai-600112, India; seedinmenew@yahoo.com
2 Department of Computer Science and Engineering, Bharath University, Selaiyur, Chennai-600073, India; drkumaravel@gmail.com
Recent years have seen wide-ranging efforts in attribute selection research. Attribute selection can efficiently reduce the hypothesis space by removing irrelevant and redundant attributes, and attribute reduction of an information system is a key problem in rough set theory and its applications. In this paper, we compare the performance of attribute selection using two tools, WEKA 3.7 and ROSE2. Filter methods score a feature subset with an alternative measure instead of the error rate; such a measure is chosen to be fast to compute while still capturing the usefulness of the feature set. Many filters provide a feature ranking rather than an explicit best feature subset, and the cutoff point in the ranking is chosen via cross-validation. We used search methods such as Best First and Greedy Stepwise to evaluate a subset of features as a group for suitability. We applied these methods to an internet usage data set, and the comparison results are tabulated for the various methods of searching the solution space to eliminate irrelevant attributes. The results reveal some noteworthy issues with attribute selection tools and point to better ways of identifying irrelevant attributes; comparing the attribute reduction tools shows considerable differences between them.
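The filter scoring and greedy search described in the abstract can be sketched as follows. This is a minimal, illustrative toy implementation (information-gain scoring with a greedy forward "stepwise" search), not the paper's WEKA or ROSE2 code; all function names and the toy data are hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(col, labels):
    """Filter score for one attribute column: H(Y) - H(Y | X)."""
    n = len(labels)
    cond = 0.0
    for v in set(col):
        subset = [y for x, y in zip(col, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

def rank_attributes(rows, labels):
    """Rank attributes by information gain, best first (a filter ranking)."""
    scores = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        scores.append((info_gain(col, labels), j))
    return sorted(scores, reverse=True)

def joint_ig(rows, labels, subset):
    """Score a whole subset at once via the joint value tuple."""
    col = [tuple(r[j] for j in subset) for r in rows]
    return info_gain(col, labels)

def greedy_stepwise(rows, labels):
    """Greedy forward search: add the attribute that most improves
    the subset score; stop when no attribute improves it."""
    selected, best = [], 0.0
    while len(selected) < len(rows[0]):
        score, j = max((joint_ig(rows, labels, selected + [j]), j)
                       for j in range(len(rows[0])) if j not in selected)
        if score <= best + 1e-12:
            break
        selected.append(j)
        best = score
    return selected

# Toy data: attribute 0 determines the label, attribute 1 is irrelevant noise.
rows = [(1, 0), (1, 1), (0, 0), (0, 1)]
labels = [1, 1, 0, 0]
ranking = rank_attributes(rows, labels)   # attribute 0 ranks first with gain 1.0
selected = greedy_stepwise(rows, labels)  # → [0]: the irrelevant attribute is dropped
```

In WEKA terms, `info_gain` plays the role of an attribute evaluator and `greedy_stepwise` of a search method; the cutoff in a ranking would in practice be chosen by cross-validation rather than the fixed improvement threshold used here.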
Keywords: Classifications, Data Mining, Rough Set Explorer, Search Methods, Selected Attributes, WEKA