Publication Type:

Conference Paper

Source:

Intelligent Systems Technologies and Applications, Springer International Publishing, Cham (2016)

ISBN:

9783319230368

URL:

https://link.springer.com/chapter/10.1007/978-3-319-23036-8_25

Keywords:

Descent Method, Gradient Descent, Graphics Processing Unit, Ordinary Least Squares, Stochastic Gradient Descent

Abstract:

Regression is a well-studied method for the prediction of real-valued data. Depending on the structure of the data involved, different approaches have been adopted for estimating the parameters, including linear equation solvers, gradient descent, the Least Absolute Shrinkage and Selection Operator (LASSO), and the like. The performance of each varies with the data size and the computation involved. Many techniques have been introduced to improve their performance, such as QR factorization for the least squares problem. Our focus is on analysing the performance of gradient descent and QR-based ordinary least squares (OLS) for estimating and updating the parameters under varying data sizes. We consider both tall/skinny and short/fat matrices. We have implemented the Block Householder method of QR factorization on the Compute Unified Device Architecture (CUDA) platform using a GTX 645 Graphics Processing Unit (GPU) with the initial set of data. Newly arriving data is incorporated directly into the existing Q and R factors rather than applying QR factorization from scratch. This updating-QR platform is then used to perform regression analysis, which supports regression analysis on the fly. The results are compared against our gradient descent implementation. They show that the parallel-QR method for regression analysis achieves a speed-up of up to 22x over the gradient descent method when the attribute size is larger than the sample size, and a speed-up of up to 2x when the sample size is larger than the attribute size. Our results also show that the updating-QR method achieves a speed-up approaching 2x over the gradient descent method for large datasets when the sample size is less than the attribute size.
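The abstract's core ideas, QR-based OLS, appending new rows to an existing factorization instead of refactorizing the full matrix, and a gradient descent baseline, can be illustrated in a minimal NumPy sketch. This is not the paper's CUDA Block Householder implementation; the row-update here uses the standard trick of refactorizing only the small stacked matrix [R; X_new], and the data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# Synthetic, noise-free regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.arange(1.0, 6.0)
y = X @ beta_true

# --- QR-based OLS: X = Q R, then solve R b = Q^T y. ---
Q, R = np.linalg.qr(X)                     # reduced QR factorization
beta_qr = np.linalg.solve(R, Q.T @ y)

# --- Updating-QR with new rows: instead of refactorizing the full
# [X; X_new], factor the small (n + p) x n stacked matrix [R; X_new],
# which yields the same normal equations as the full problem. ---
X_new = rng.standard_normal((10, 5))
y_new = X_new @ beta_true
Q2, R2 = np.linalg.qr(np.vstack([R, X_new]))
rhs = np.concatenate([Q.T @ y, y_new])     # carry the projection forward
beta_upd = np.linalg.solve(R2, Q2.T @ rhs)

# --- Gradient descent baseline on the initial data. ---
beta_gd = np.zeros(5)
lr = 0.1
for _ in range(1000):
    grad = X.T @ (X @ beta_gd - y) / len(X)
    beta_gd -= lr * grad
```

Because the data is noise-free, all three estimates recover `beta_true`; the performance gap the paper measures comes from how these costs scale with sample and attribute size on the GPU.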

Cite this Research Publication

Remya Rajesh and K. Namitha, “Performance Analysis of Updating-QR Supported OLS Against Stochastic Gradient Descent”, in Intelligent Systems Technologies and Applications, Springer International Publishing, Cham, 2016.