The bias–variance trade-off is a standard tool for analysing the performance of machine learning algorithms, and its behaviour can be observed using regularized linear regression. In this paper, we apply regression to a sample dataset together with an optimization technique. Polynomial regression is fitted to the same dataset to improve the fit, and the results are normalized. Earlier studies show that even when polynomial regression is applied, the fitted curve is complex and often drops off at the extremes. Regularization is one of the techniques used to optimize the model and narrow these gaps. Because these properties vary from one dataset to another, we search for an optimal value of the regularization parameter lambda. The overfitting error in the plotted curve is significantly reduced, and the error is then plotted against the values of lambda. The main goal of this paper is to avoid the overfitting problem. A cross-validation set is used to estimate the error and to decide which parameters work best in the model.
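The procedure the abstract describes (fit a regularized polynomial regression, then choose lambda by cross-validated error) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic sine dataset, the degree-9 polynomial, and the candidate lambda grid are all assumptions made here for demonstration.

```python
import numpy as np

def poly_features(x, degree):
    # Design matrix [1, x, x^2, ..., x^degree]
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    # Closed-form regularized least squares; the bias term is not penalized
    reg = lam * np.eye(X.shape[1])
    reg[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + reg, X.T @ y)

def cv_error(x, y, degree, lam, k=5, seed=0):
    # k-fold cross-validation mean squared error for a given lambda
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errs = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(poly_features(x[train], degree), y[train], lam)
        pred = poly_features(x[val], degree) @ w
        errs.append(np.mean((y[val] - pred) ** 2))
    return float(np.mean(errs))

# Hypothetical sample dataset: noisy sine curve
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1.0, 1.0, 60))
y = np.sin(3 * x) + rng.normal(0.0, 0.2, x.size)

# Plot-style sweep: error obtained for each candidate lambda
lambdas = [0.0, 1e-3, 1e-2, 1e-1, 1.0, 10.0]
errors = {lam: cv_error(x, y, degree=9, lam=lam) for lam in lambdas}
best_lam = min(errors, key=errors.get)
```

A high-degree polynomial with lambda = 0 tends to overfit (low training error, higher cross-validation error), while a very large lambda underfits; the `errors`-versus-`lambdas` sweep mirrors the error-against-lambda plot the paper uses to pick the optimal value.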
M. Rajasekhar Reddy, B. Nithish Kumar, Madhusudana Rao Nalluri, and B. Karthikeyan, "A New Approach for Bias–Variance Analysis Using Regularized Linear Regression," in Advances in Bioinformatics, Multimedia, and Electronics Circuits and Signals, Springer, Singapore, 2020, pp. 35–46.