Indian Journal of Science and Technology
Year: 2016, Volume: 9, Issue: 48, Pages: 1-5
Rajesh Chandrakant Sanghvi1* and Himanshu B Soni2
1 Department of Applied Science and Humanities, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India; [email protected]
2 Department of Electronics and Communication, G H Patel College of Engineering and Technology, Vallabh Vidyanagar – 388120, Gujarat, India; [email protected]
Objectives: This article focuses on improving the convergence rate and reducing the number of operations needed to train an adaptive filter with the Least Mean Square (LMS) algorithm. Methods/Statistical Analysis: In this paper, two modifications are suggested to train an adaptive filter using the LMS algorithm; one is based on the initialization of the weights and the other on early termination of the training of a sequence. Findings: Conventionally, the optimum weights of an adaptive filter are found by initializing the weights to zero, providing several random sequences as input and updating the weights according to the error. Moreover, the weights are continuously updated over the entire sequence even after they have converged. In the proposed algorithm, the weights are initialized to zero only once, for the first sequence. The optimum weights obtained for a sequence are used as the initial weights for the subsequent sequence to improve the convergence rate. Further, to reduce the number of operations, the weight update process for a sequence is terminated when the error falls below a prescribed threshold. Applications/Improvements: Results show that with these modifications the rate of convergence increases and the number of multiplications decreases.
Keywords: Convergence Rate, Initialization of Weights, LMS Algorithm, Multiplications, Mean Square Error, Threshold
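The two modifications described in the abstract, warm-starting each sequence with the weights converged on the previous one and stopping the weight updates once the error drops below a threshold, can be sketched as follows. This is an illustrative sketch, not the authors' exact implementation; the function name, step size, threshold value, and the noiseless system-identification setup are all assumptions made for the example.

```python
import numpy as np

def lms_train(x, d, n_taps, w=None, mu=0.05, err_threshold=None):
    """One training pass of the LMS algorithm over a sequence.

    x: input sequence; d: desired sequence; n_taps: filter length.
    w: initial weights. Passing the weights converged on the previous
    sequence (instead of None, i.e. zeros) is the warm-start idea.
    err_threshold: if set, updating stops once the instantaneous error
    magnitude falls below it (the early-termination idea).
    Returns the final weights and the number of updates performed.
    """
    if w is None:
        w = np.zeros(n_taps)                   # cold start (first sequence)
    updates = 0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]    # n_taps most recent samples
        e = d[n] - w @ u                       # instantaneous error
        if err_threshold is not None and abs(e) < err_threshold:
            break                              # converged: skip remaining updates
        w = w + mu * e * u                     # standard LMS update
        updates += 1
    return w, updates

# Illustrative use: identify an unknown FIR system over several sequences,
# carrying the weights forward from one sequence to the next.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])                 # hypothetical unknown system
w = None
for _ in range(5):
    x = rng.standard_normal(2000)
    d = np.convolve(x, h)[: len(x)]            # desired = system output
    w, updates = lms_train(x, d, 3, w=w, err_threshold=1e-4)
```

Because each sequence starts from already-converged weights, the error drops below the threshold almost immediately on later sequences, so far fewer multiplications are spent on weight updates than in a full-length pass.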