Chairperson, Mathematics, School of Engineering, Coimbatore

Professor, Mathematics, School of Engineering, Coimbatore

Dr. Ravichandran currently serves as Chairperson and Professor in the Department of Mathematics, School of Engineering, Coimbatore.

Earlier, he served in the Statistical Quality Control Department of a manufacturing company for more than 12 years. His research and subject areas of interest include statistical quality control, statistical inference, Six Sigma, Total Quality Management, statistical pattern recognition and SPSS.

He has published two books and a number of papers on Six Sigma and statistical inference in international journals and on websites. His papers have received more than 150 citations.

He regularly reviews research papers for international journals such as The TQM Journal (Emerald) and Total Quality Management and Business Excellence (Taylor & Francis – Routledge). He has guided one Ph.D. scholar.

He has organized a two-day National Conference on Quality Improvement Concepts and their Implementation in Higher Educational Institutions (QICIHEI – 2009).

Dr. Ravichandran has served as a resource person in many workshops and seminars (on Six Sigma, Statistics and SPSS) organized by various educational institutions and universities.

He was a Senior Member of the American Society for Quality (1987 – 2007) and is a Life Member of the Indian Society for Technical Education (ISTE).

Year of Publication | Title

2021 |
T. J. Devika and Dr. Ravichandran J., “A clustering method combining multiple range tests and K-means”, Communications in Statistics - Theory and Methods, pp. 1-56, 2021. [Abstract] This paper explores possibilities of applying multiple comparison tests (MCTs), which are commonly used in statistics to group the means once the analysis of variance (ANOVA) procedure rejects the hypothesis that all the means are equal. It is proposed here to apply the MCT procedure to perform clustering when the data are repetitive and multidimensional. Since the MCT procedure may result in overlapping clusters, we further develop an approach to first form initial clusters and then apply the K-means procedure to construct non-overlapping clusters. It may be noted that the choice of initial clusters for the K-means procedure is still ambiguous. Accordingly, the paper is presented in a sequence covering (i) an algorithm for step-by-step implementation of the K-means procedure for clustering, (ii) an algorithm for step-by-step implementation of the MCT procedure for clustering and (iii) an algorithm for step-by-step implementation of a combined procedure to resolve the overlapping clusters. Numerical examples, including an open data set, are considered to demonstrate the algorithms and also to study their performance in terms of total mean square errors. |
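The three algorithms listed in this abstract are not reproduced here, but as a rough illustration of the K-means building block they rely on, a bare-bones one-dimensional K-means iteration can be sketched as follows (the data and starting centres are invented for illustration; the paper's MCT-based initialisation is not shown):

```python
def kmeans_1d(data, centers, iters=20):
    """Plain K-means for one-dimensional data (illustrative sketch)."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in centers]
        for x in data:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        # Update step: each centre moves to the mean of its cluster
        # (an empty cluster keeps its previous centre).
        centers = [sum(c) / len(c) if c else m
                   for c, m in zip(clusters, centers)]
    return centers, clusters

centers, clusters = kmeans_1d([1.0, 1.2, 0.9, 5.0, 5.2, 4.8], [0.0, 10.0])
# centers converge to roughly [1.03, 5.0]
```

As the abstract notes, the result depends heavily on the starting centres, which is exactly the ambiguity the paper's combined MCT procedure is designed to resolve.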

2020 |
Niveditha A. and Dr. Ravichandran J., “Six Sigma quality evaluation of life test data based on Weibull distribution”, International Journal of Quality & Reliability Management, 2020. [Abstract] Purpose – While Six Sigma metrics have been studied by researchers in detail for normal distribution-based data, in this paper we have attempted to study the Six Sigma metrics for the two-parameter Weibull distribution that is useful in many life test data analyses. Design/methodology/approach – In the theory of Six Sigma, most processes are assumed normal and Six Sigma metrics are determined for such a process of interest. In reliability studies, non-normal distributions are more appropriate for life tests. In this paper, a theoretical procedure is developed for determining Six Sigma metrics when the underlying process follows the two-parameter Weibull distribution. Numerical evaluations are also considered to study the proposed method. Findings – In this paper, by matching the probabilities under different normal process-based sigma quality levels (SQLs), we first determined the Six Sigma specification limits (the lower and upper Six Sigma limits – LSSL and USSL) for the two-parameter Weibull distribution by setting different values for the shape parameter and the scale parameter. Then, the lower SQL (LSQL) and upper SQL (USQL) values are obtained for the Weibull distribution for both centered and shifted cases. We present numerical results for Six Sigma metrics of the Weibull distribution with different parameter settings. We also simulated a set of 1,000 values from this Weibull distribution for both centered and shifted cases to evaluate the Six Sigma performance metrics. It is found that the SQLs under the two-parameter Weibull distribution are slightly lower than those when the process is assumed normal. Originality/value – The theoretical approach proposed for determining Six Sigma metrics for the Weibull distribution is new to Six Sigma quality practitioners, who commonly deal with normal processes or normal approximations to non-normal processes. The procedure developed here is, in fact, used to first determine LSSL and USSL, following which LSQL and USQL are obtained. This in turn has helped to compute Six Sigma metrics such as defects per million opportunities (DPMO) and the parts that are extremely good per million opportunities (EGPMO) under the two-parameter Weibull distribution for lower-the-better (LTB) and higher-the-better (HTB) quality characteristics. We believe that this approach is quite new to practitioners, and it is not only useful to practitioners but will also serve to motivate researchers to do more work in this field of research. |

2020 |
S. Kalaivani and Dr. Ravichandran J., “Performance evaluation of exponential distribution using Six Sigma-based tail probabilities”, Communications in Statistics - Simulation and Computation, pp. 1-21, 2020. [Abstract] While it is customary and simple to deal with the normality assumption for the data on hand, such an assumption may lead to inaccurate outcomes if the underlying distribution is non-normal. Six Sigma analysis of any process is, in general, based on the normality assumption, irrespective of the nature of the original data. There are studies where non-normal data are transformed into normal data before performing Six Sigma analysis. In this paper, we propose to study the Six Sigma metrics for life test data that follow the exponential distribution by matching the Six Sigma-based tail probabilities. Both centered and shifted cases of the exponential data are considered. Further, we consider higher-the-better and lower-the-better type product specifications to determine defects per million opportunities (DPMO) and extremely good units per million opportunities (EGPMO) based on the exponential distribution. Extensive numerical computations are done, in addition to simulations, to illustrate how DPMO and EGPMO are determined for the centered or shifted exponential distribution. Some comparisons are also made with existing approaches. |

2019 |
Dr. Ravichandran J., “Six Sigma metrics based on lognormal distribution for life tests”, International Journal of Quality & Reliability Management, vol. 36, no. 9, pp. 1477 - 1489, 2019. [Abstract] Purpose – The purpose of this paper is to propose an approach for studying the Six Sigma metrics when the underlying distribution is lognormal. Design/methodology/approach – The Six Sigma metrics are commonly available for normal processes that are run in the long run. However, there are situations in reliability studies where non-normal distributions are more appropriate for life tests. In this paper, Six Sigma metrics are obtained for the lognormal distribution. Findings – In this paper, unlike the normal process, for the lognormal distribution there are unequal tail probabilities. Hence, the sigma levels are not the same for left-tail and right-tail defects per million opportunities (DPMO). Also, in life tests, while the left-tail probability is related to DPMO, the right tail is considered as extremely good PMO. This aspect is introduced, and based on it the sigma levels are determined for different parameter settings and left- and right-tail probability combinations. Examples are given to illustrate the proposed approach. Originality/value – Though Six Sigma metrics have been developed based on a normality assumption, there have been no studies for determining the Six Sigma metrics for non-normal processes, particularly for life test distributions in reliability studies. The Six Sigma metrics developed here for the lognormal distribution are new to practitioners, and this will motivate researchers to do more work in this field of research. |

2019 |
Dr. Ravichandran J., “A review of specification limits and control limits from the perspective of Six Sigma quality processes”, International Journal of Six Sigma and Competitive Advantage, vol. 11, no. 1, pp. 58 - 72, 2019. [Abstract] While specification limits represent the voice of the customer, control limits address the voice of the process. Specification limits and control limits are treated as independent entities until modified control limits are proposed, which stress the need for control limits to keep satisfying specification requirements as well. High-quality processes always attempt to provide products right the first time, where specification requirements are checked for controlled processes. This calls for a strong link between control limits and specification limits. A house-of-quality-like representation can be used to describe such a link. It is preferred to intertwine the features of specification limits with the construction of control limits to achieve right the first time. In this paper, a critical review of specification limits and control limits is made from the perspective of Six Sigma quality, and the need for a strong bond between the two is stressed. |

2019 |
Dr. Ravichandran J., “Self-starting X-bar control chart based on Six Sigma quality and sometimes pooling procedure”, Journal of Statistical Computation and Simulation, vol. 89, no. 2, pp. 362-377, 2019. [Abstract] Self-starting control charts have been proposed in the literature to allow process monitoring when only a small amount of relevant data is available. In fact, self-starting charts are useful in monitoring a process quickly, without having to collect a sizable Phase I sample for estimating the in-control process parameters. In this paper, a new self-starting control charting procedure is proposed in which first an effective initial sample is chosen from the perspective of Six Sigma quality, then the successive sample means are either pooled or not pooled (sometimes pooling procedure) for computing the next Q-statistics, depending upon the signal. It is observed that the sample statistics obtained from this in-control Phase I situation can serve as more efficient estimators of unknown parameters for Phase II monitoring. An example is considered to illustrate the construction of the proposed chart and to compare its performance with existing ones. |

2019 |
Dr. Ravichandran J., “Transition Probabilities and ARL performance of Six Sigma zone control charts”, Communications in Statistics - Theory and Methods, vol. 48, no. 16, pp. 3976 - 3991, 2019. [Abstract] In this article, Six Sigma zone control charts (SSZCCs) are proposed for world-class organizations. The transition probabilities are obtained using the Markov chain approach. The average run length (ARL) values are then presented. The ARL performance of the proposed SSZCCs and the standard Six Sigma control chart (SSCC) without zones or run rules is studied. The ARL performance of these charts is then compared with those of the other standard zone control charts (ZCCs), the modified ZCC and the traditional Shewhart control chart (SCC) with common run rules. As expected, it is shown that the proposed SSZCC outperforms the standard SSCC without zones or run rules for process shifts of any magnitude. When compared to the other standard ZCCs and the Shewhart chart with common run rules, it is observed that the proposed SSZCCs have much higher ARLs for smaller shifts, and hence they prevent unwanted process disturbances. The application of the proposed SSZCC is illustrated using a real-time example. |

2017 |
Dr. Ravichandran J., “Control chart for high-quality processes based on Six Sigma quality”, International Journal of Quality & Reliability Management, vol. 34, pp. 2-17, 2017. |

2017 |
Dr. Ravichandran J., “Six Sigma-Based Range and Standard Deviation Charts for Continuous Quality Improvement”, International Journal for Quality Research, vol. 11, no. 3, pp. 525-542, 2017. |

2017 |
Dr. Ravichandran J., “A Note on Determination of Sample Size from the Perspective of Six Sigma Quality”, Journal of Modern Applied Statistical Methods, vol. 16, no. 1, 2017. [Abstract] In most empirical studies (clinical, network modeling, survey-based and aeronautical studies, etc.), sample observations are drawn from a population to analyze and draw inferences about the population. Such analysis is done with reference to a measurable quality characteristic of a product or process of interest. However, fixing a sample size is an important task that has to be decided by the experimenter. One of the means of deciding an appropriate sample size is the fixation of an error limit and the associated confidence level. This implies that the analysis based on the sample used must guarantee the prefixed error and confidence level. Although there are methods to determine the sample size, the most commonly used method requires the known population standard deviation, the preset error and the confidence level. Nevertheless, such methods cannot be used when the population standard deviation is unknown. Because the sample size is yet to be determined, the experimenter has no clue to obtain an estimate of the unknown population standard deviation. A new approach is proposed to determine the sample size using the population standard deviation estimated from the product or process specification from the perspective of Six Sigma quality, with a goal of 3.4 defects per million opportunities (DPMO). The aspects of quality improvement through variance reduction are also presented. The method is effectively described for its use and is illustrated with examples. |
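As a hedged sketch of the idea summarised above — the familiar n = (zσ/E)² sample-size formula, with σ estimated from the specification width under the assumption (ours, for illustration, not the paper's exact derivation) that at Six Sigma quality the tolerance spans about 12 standard deviations:

```python
import math
from statistics import NormalDist

def sample_size(error_limit, confidence, lsl, usl, sigma_span=12.0):
    # Illustrative assumption: sigma ≈ (USL - LSL) / 12 at Six Sigma quality.
    sigma = (usl - lsl) / sigma_span
    # Two-sided z value for the requested confidence level.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Classical n = (z * sigma / E)^2, rounded up to a whole unit.
    return math.ceil((z * sigma / error_limit) ** 2)

n = sample_size(error_limit=0.5, confidence=0.95, lsl=10.0, usl=22.0)
# n = 16 for these illustrative numbers
```

The point of the approach is that σ comes from the specification rather than from pilot data, so the formula remains usable before any sample has been drawn.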

2016 |
Dr. Ravichandran J., “Six Sigma-based X-bar Control Chart for Continuous Quality Improvement”, International Journal for Quality Research, vol. 10, no. 2, pp. 257 – 266, 2016. [Abstract] Introduced by Shewhart, the traditional variable control chart for the mean (X-bar chart) is an effective tool for controlling and monitoring processes. Notwithstanding, the main disadvantage of the X-bar chart is that the population standard deviation is unknown, though the sample mean is an unbiased estimator of the population mean. There are many approaches to estimating the unknown standard deviation with the expertise available to researchers and practitioners, and these may lead to varying conclusions. In this paper, an innovative approach is introduced to estimate the population standard deviation from the perspective of Six Sigma quality for the construction of the proposed control chart for the mean, called the Six Sigma-based X-bar control chart. Under the assumption that the process is normal, in the proposed chart the population mean and standard deviation are drawn from the process specification from the perspective of Six Sigma quality. After discussing the aspects of the traditional X-bar control chart, the procedure for the construction of the proposed new Six Sigma-based X-bar control chart is presented. The new chart is capable of maintaining the process mean close to the target by variance reduction, resulting in quality improvement. Also, it may be noted that at a point in time the process, though under statistical control, may be maintaining only a particular sigma quality level, while the goal is the Six Sigma quality level of just 3.4 defects per million opportunities. Hence, as a practice of continuous quality improvement, it is suggested to use the proposed control chart repeatedly, with improvement, until the goal of Six Sigma with 3.4 defects per million opportunities is achieved. An innovative cyclic approach for performing the continuous quality improvement activity is also presented. The construction of the proposed Six Sigma-based X-bar control chart is demonstrated using an illustrative example. |

2016 |
Dr. Ravichandran J., “Estimation of DPMO and EGPMO for Higher-the-Better and Lower-the-Better Quality Characteristics for Quality Evaluation”, Total Quality Management & Business Excellence, vol. 27, no. 10, pp. 1112 - 1120, 2016. [Abstract] Whenever a specification is provided for a measurable quality characteristic, under the normality assumption, the defects per million opportunities (DPMO) can be computed by taking into account both tail probabilities. This implies that a unit of a product is declared defective if a measurement of the quality characteristic falls either below the specified lower limit or above the upper limit. However, there are practical situations where a unit is said to be defective only if the measurement falls below the lower limit (higher-the-better) or only if it falls above the upper limit (lower-the-better). Under these circumstances, the DPMO needs to be computed taking into account the appropriate tail probability (left or right). In this paper, the aspects of Six Sigma are used to decide the upper and lower sigma quality limits for these two cases. Following this, the DPMO is estimated accordingly. Probabilities of far good units, that is, extremely good parts per million opportunities (EGPMO), of the quality characteristic are also determined. It is discussed how DPMO and EGPMO help in evaluating the overall quality level of the process/product of interest. The procedure is illustrated with numerical examples. |
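For a one-sided specification under the conventional 1.5σ mean-shift assumption, the DPMO and EGPMO described above reduce to normal tail areas; a minimal sketch (function names are ours, not the paper's):

```python
from statistics import NormalDist

def dpmo_one_sided(sigma_level, shift=1.5):
    # Defective tail: area beyond the spec limit after the 1.5-sigma shift.
    return NormalDist().cdf(-(sigma_level - shift)) * 1e6

def egpmo_one_sided(sigma_level, shift=1.5):
    # "Extremely good" tail on the favourable side of the distribution.
    return NormalDist().cdf(-(sigma_level + shift)) * 1e6

print(round(dpmo_one_sided(6.0), 1))  # 3.4 DPMO at the Six Sigma level
```

At the Six Sigma level the defective tail recovers the familiar 3.4 DPMO figure, while the EGPMO tail is several orders of magnitude smaller.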

2015 |
Dr. Palanisamy T. and Dr. Ravichandran J., “Variance Estimation in Heteroscedastic Models by Undecimated Haar Transform”, Communications in Statistics - Simulation and Computation, vol. 44, pp. 1532–1544, 2015. [Abstract] We propose a method to maximize the accuracy in the estimation of piecewise constant and piecewise smooth variance functions in a nonparametric heteroscedastic fixed design regression model. The difference-based initial estimates are obtained from the given observations. Then an estimator is constructed by using an iterative regularization method with the analysis-prior undecimated three-level Haar transform as the regularizer term. We notice that this method shows better results in the mean square sense over an existing adaptive estimation procedure, considering all the standard test functions used in addition to the functions that we target. Some simulations and comparisons with other methods are conducted to assess the performance of the proposed method. |

2015 |
Dr. Palanisamy T. and Dr. Ravichandran J., “A wavelet-based hybrid approach to estimate variance function in heteroscedastic regression models”, Statistical Papers, vol. 56, pp. 911-932, 2015. [Abstract] We propose a wavelet-based hybrid approach to estimate the variance function in a nonparametric heteroscedastic fixed design regression model. A data-driven estimator is constructed by applying wavelet thresholding along with the technique of sparse representation to the difference-based initial estimates. We prove the convergence of the proposed estimator. The numerical results show that the proposed estimator performs better than the existing variance estimation procedures in the mean square sense over a range of smoothness classes. |

2015 |
Dr. Ravichandran J., “Control Chart for High Quality Processes Based on Six Sigma Quality”, International Journal of Quality and Reliability Management, 2015. |

2014 |
Dr. Palanisamy T. and Dr. Ravichandran J., “Estimation of variance function in heteroscedastic regression models by generalized coiflets”, Communications in Statistics - Simulation and Computation, vol. 43, pp. 2213-2224, 2014. [Abstract] A wavelet approach is presented to estimate the variance function in a heteroscedastic nonparametric regression model. The initial variance estimates are obtained as squared weighted sums of neighboring observations. The initial estimator of a smooth variance function is improved by means of wavelet smoothers under the situation that the samples at the dyadic points are not available. Since the traditional wavelet system for variance function estimation is not appropriate in this situation, we demonstrate that the choice of the wavelet system is significant for better performance. This is accomplished by choosing a suitable wavelet system known as the generalized coiflets. We conduct extensive simulations to evaluate the finite sample performance of our method. We also illustrate our method using a real dataset. |

2012 |
Dr. Ravichandran J. and Dr. Palanisamy T., “Nonparametric Regression Curve Estimation Using Generalized Coiflets”, Bulletin of Calcutta Mathematical Society, vol. 104, no. 2, pp. 139-146, 2012. |

2012 |
Dr. Ravichandran J., “A review of preliminary test-based statistical methods for the benefit of Six Sigma quality practitioners”, Statistical Papers, vol. 53, pp. 531-547, 2012. [Abstract] Ever since Professor Bancroft developed inference procedures using preliminary tests, there has been a lot of research in this area by various authors across the world. This can be evidenced from two papers that widely reviewed the publications on preliminary test-based statistical methods. The use of preliminary tests in resolving doubts arising over the model parameters has gained momentum, as it has proven to be effective and powerful compared with classical methods. Unfortunately, there has been a downward trend in research related to preliminary tests, as can be seen from only a few recent publications. Obviously, the benefits of preliminary test-based statistical methods did not reach Six Sigma practitioners, as the concept of Six Sigma had only just taken off and was in a premature state. In this paper, efforts have been made to present a review of the publications on preliminary test-based statistical methods. Though studies on preliminary test-based methods have been done in various areas of statistics, such as the theory of estimation, hypothesis testing, analysis of variance, regression analysis and reliability, to mention a few, only a few important methods are presented here for the benefit of readers, particularly Six Sigma quality practitioners, to understand the concept. In this regard, the Define, Measure, Analyze, Improve and Control (DMAIC) methodology of Six Sigma is presented, with the Analyze phase linked to preliminary test-based statistical methods. Examples are also given to illustrate the procedures. |

2011 |
Dr. Ravichandran J., “A chi-square-based heuristic statistical procedure for approximating bid price in a competitive marketplace: A case study”, Journal of Business and Industrial Marketing, vol. 27, pp. 69-76, 2011. [Abstract] Purpose: The purpose of this paper is to propose a chi-square-based heuristic statistical procedure that considers the most recent bid prices of self and competitors in proposing an optimum winning bid price. The use of a simple regression method to predict the adjusted optimum bid price is also considered. Design/methodology/approach: In order to achieve this objective, the proposed approach uses past bid prices of the case company and its competitors. A heuristic chi-square statistical procedure is then developed to obtain the expected bid prices. The absolute difference between the prices is obtained and is adjusted to get the optimal bid price for the forthcoming bid. Findings: It is demonstrated that the proposed heuristic chi-square-based approach is superior to the regression method otherwise in practice in the case company, since the latter still gets support from the former. Further, the paper throws light on how a familiar statistical method can be conveniently but effectively applied as a tactic of marketing management. Research limitations/implications: The method is developed based on the experience of the case company in bid participation. Therefore, a study of the performance of the proposed method from the perspective of different types of companies can be taken up as future work. Also, if appropriate, time and other factors may be considered as additional independent factors along with the tender quantity in predicting the tender price. Practical implications: The proposed heuristic chi-square procedure is simple to implement, as the required data on bid prices are usually available once the bids are open. The case study reveals how the method can be successfully implemented to win bids in a competitive marketplace. Also, the procedure is generic, so it can be adapted as is, or with slight modification, by companies of any nature participating in bids. Originality/value: The proposed approach would have high value among manufacturing firms and marketing managers who participate in tenders with an aim to increase sales turnover, which in turn will increase profit. The paper is unique of its kind and will help researchers to think of either extending or proposing such new approaches as needed. |

2008 |
Dr. Ravichandran J., “Finding the level of customer complaints”, iSixSigma, 2008. [Abstract] In the beginning of a deployment, many companies set a goal of 3.4 defects per million opportunities (DPMO) using Six Sigma quality concepts in production, and later extend this concept to other operational areas. Fewer companies, however, have extended Six Sigma from a manufacturing application to manage customer satisfaction or customer complaints. But it is possible to measure the sigma level of customer complaints, and this information can be valuable when making improvements in that area. |

2008 |
Dr. Ravichandran J., “The performance of treatments in single-factor experiments using Six Sigma metrics”, International Journal of Six Sigma and Competitive Advantage, vol. 4, pp. 114-132, 2008. [Abstract] Whether the specification of a process parameter of interest (response) is available or not, experimenters always try to optimise the process by moving the mean close to the target value and simultaneously reducing the variance. The Analysis of Variance (ANOVA) is a powerful method and is useful to know whether the treatment means are significantly different or not. As a part of post-ANOVA analysis, when the hypothesis of equal means is rejected, various types of Multiple Comparison Tests (MCTs) are available to classify the means into different homogeneous groups. Also, in order to identify the significant treatment levels, the Analysis of Means (ANOM), a graphical approach, is found to be useful as it serves as an alternative to ANOVA. This paper concentrates on the analysis of the actual performance of the treatment means from the Six Sigma practitioners' point of view. Therefore, a new Six Sigma-based approach is proposed to study the performance of the treatment levels in single-factor experiments using Six Sigma performance metrics such as 'mean shifts' and 'sigma quality levels'. A new graphical display that uses mean shifts and sigma quality levels is also developed for easy interpretation. This approach will help the experimenters, particularly Six Sigma practitioners, decide on the treatment level(s) that can give the most efficient response. A number of examples are considered for illustration. |

2007 |
Dr. Ravichandran J., “Cost‐based Process Weights for DPMO and the Overall Performance of an Organization”, The TQM Magazine, vol. 19, no. 5, pp. 442-453, 2007.[Abstract] Purpose – The purpose of this paper is to propose a new approach in which cost‐based process weights are used to determine a unique weighted‐defects per million opportunities (DPMO) measure and its corresponding overall sigma level in order to classify an organization as either “world‐class,” “industry average” or “non‐competitive.” Design/methodology/approach – In order to achieve this objective, the proposed approach uses both internal and external performance of the products and processes, in terms of the costs involved, to determine cost‐based process weights. These weights are then incorporated into the respective DPMOs for computing weighted‐DPMOs. Finally, a unique weighted‐DPMO and its corresponding sigma level are found. Findings – The proposed method involves various costs for determining process weights. The findings reveal that the weight‐based overall sigma level is more realistic than one calculated without weights. Further, the results of this study could provide interesting feedback to Six Sigma practitioners, as they are particular about DPMOs and return on investment in project implementations. Research limitations/implications – The results of this paper are based on the weights of the respective processes and their products, calculated using various cost aspects. Determining such weights by means of other process and product factors, incorporating the effects of various marketing activities, if any, could extend its generality and fill the gap. Practical implications – The proposed method is simple to implement, and the required data can be collected without any additional commitments. It is also generic enough to be adapted by organizations of any nature. This paper recommends a change in practice from simply using DPMOs with equal importance to using weight‐based DPMOs for evaluating the overall sigma level (performance) of an organization. Originality/value – The proposed approach would have high value among Six Sigma quality practitioners and researchers, as it provides a new and more realistic measure of the overall performance of an organization during the evaluation process. |

2006 |
Dr. Ravichandran J., “Using Weighted-DPMO to Calculate an Overall Sigma Level”, 2006.[Abstract] An organization is always successful in its journey towards business excellence if it has specific goals for each of its critical processes. Various quality management programs adopted and implemented by such an organization can streamline its activities towards the ultimate goal of business excellence. When the quality improvement program is Six Sigma, the goal for the organization is obvious – to reach a sigma level of six, or the well-known objective of 3.4 defects per million opportunities (DPMO). An organization operating at that level is described by Six Sigma proponents as being world class. |

2006 |
Dr. Ravichandran J., “Six Sigma Milestones: An Overall Sigma Level of an Organization”, Total Quality Management & Business Excellence, vol. 17, no. 8, pp. 973 – 980, 2006. |

2006 |
Dr. Ravichandran J., “Setting up a Quality Specification”, Six Sigma Forum Magazine, vol. 5, no. 2, pp. 26 – 30, 2006. |

2004 |
Dr. Ravichandran J., “Six Sigma”, INDUSTRY2.0, p. 82, 2004. |

1989 |
Dr. Ravichandran J. and Rao, C. V., “On power function of a conditionally specified test procedure in an unbalanced random effects model”, Journal of the Indian Society of Agricultural Statistics, vol. 41, no. 2, pp. 176-192, 1989. |

1988 |
Dr. Ravichandran J., Han, C. P., and Rao, C. V., “Inference based on conditional specification”, Communications in Statistics - Theory and Methods, vol. 17, no. 6, pp. 1945-1964, 1988.[Abstract] The first bibliography in the area of inference based on conditional specification was published in 1977. A second bibliography is compiled, and a combined subject index is given. |

Year of Publication | Title |
---|---|

2015 |
Dr. Ravichandran J., Probability and Random Process for Engineers. India: IK International Publishing House Pvt Ltd, 2015.[Abstract] Probability and Random Processes is one of the most difficult, but most important, subjects for engineering students, and it requires a prerequisite in probability and statistics. The concepts of random processes are built upon the concepts of probability and statistics, and one entire chapter is dedicated to these topics. Students have long needed a good textbook for studying random processes from scratch. Having taught this subject for a number of years, I understood the need for a book on it, and that motivated me to spend two years writing it. |

2015 |
Dr. Ravichandran J., Probability and Random Processes. New Delhi: IK International, 2015. |

2010 |
Dr. Ravichandran J., Probability and Statistics for Engineers. Wiley India, 2010.[Abstract] Probability and Statistics for Engineers is written for undergraduate students of engineering and the physical sciences. Besides B.E. and B.Tech. students, those pursuing MCA and MCS can also find the book useful. The book is equally useful to Six Sigma practitioners in industry. Comprehensive yet concise, the text is well organized into 15 chapters that can be covered in a one-semester course in probability and statistics. Designed to meet the requirements of engineering students, the text covers all important topics, emphasizing basic engineering and science applications. Assuming a knowledge of elementary calculus, all solved examples are realistic, well chosen, self-explanatory and graphically illustrated, helping students understand the concepts of each topic. Exercise problems and MCQs are given with answers, which will help students prepare well for their exams. |

Faculty Research Interest: