Publication Type : Conference Proceedings
Publisher : IEEE
Source : 2024 6th International Conference on Electrical, Control and Instrumentation Engineering (ICECIE)
Url : https://doi.org/10.1109/icecie63774.2024.10815694
Campus : Bengaluru
School : School of Engineering
Year : 2024
Abstract : A Single-Input Single-Output (SISO) interacting tank system is widely used in various industries for controlling liquid levels. It is a common setup in pharmaceutical industries, where chemicals are stored and mixed. Studying the control of fluid levels in interconnected tanks is necessary to understand the system dynamics and its stability under feedback. In this paper, a system with two tanks is considered, each with different inflow and outflow rates, and the liquid levels in the tanks are influenced by the interaction between them. Liquid-level data is collected from several experiments on the two-tank system and is modelled using System Identification (SI). A Proportional-Integral (PI) controller is used to regulate the liquid levels in the tanks by adjusting the inflow based on feedback. Reinforcement Learning (RL) is increasingly adopted in industry because of its ability to optimize complex decision-making processes and its adaptability to changing conditions without requiring a detailed mathematical model of the system. In this research, Twin Delayed Deep Deterministic Policy Gradient (TD3) and Deep Deterministic Policy Gradient (DDPG) RL agents are used to optimize PI tuning. The results highlight the performance of each agent in terms of reducing the settling time, rise time and overshoot, thereby enhancing the stability of the system. The performance metrics and error metrics for both the TD3 and DDPG algorithms are observed. The rise time and settling time with TD3 are 8.63 s and 25.74 s respectively, whereas with DDPG they are 26.81 s and 79.92 s, considerably higher than TD3. A comparison of the error metrics, namely ITAE, IAE and ISE, shows that TD3 yields lower error than DDPG. Overall, the experimental results conclude that TD3 is better than DDPG for tuning a PID controller.
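The workflow the abstract describes — a PI controller regulating the level of the second tank in an interacting two-tank system, evaluated with the IAE, ISE and ITAE error metrics — can be sketched in simulation. The sketch below is illustrative only: the tank areas, valve coefficients, PI gains and setpoint are hypothetical placeholders, not values from the paper (which identifies the model from experimental data and tunes the gains with TD3/DDPG agents).

```python
import math

# Hypothetical plant parameters (NOT from the paper): tank cross-sectional
# areas, interconnection/outlet valve coefficients, PI gains, and setpoint.
A1, A2 = 1.0, 1.0        # tank areas
K12, K2 = 0.5, 0.3       # valve coefficients (tank1->tank2, tank2->drain)
KP, KI = 2.0, 0.1        # illustrative PI gains (the paper tunes these via RL)
SETPOINT = 1.0           # desired level in tank 2
DT, T_END = 0.1, 200.0   # Euler step and simulation horizon

def simulate(kp=KP, ki=KI):
    """Simulate the interacting two-tank system under PI level control
    and accumulate the IAE, ISE and ITAE error metrics."""
    h1 = h2 = 0.0
    integral = 0.0
    iae = ise = itae = 0.0
    t = 0.0
    while t < T_END:
        error = SETPOINT - h2
        integral += error * DT
        # Pump inflow from the PI law; physical flow cannot be negative.
        qin = max(0.0, kp * error + ki * integral)
        # Interacting dynamics: inter-tank flow depends on the level difference.
        q12 = K12 * math.copysign(math.sqrt(abs(h1 - h2)), h1 - h2)
        qout = K2 * math.sqrt(max(h2, 0.0))
        h1 = max(0.0, h1 + DT * (qin - q12) / A1)
        h2 = max(0.0, h2 + DT * (q12 - qout) / A2)
        # Error metrics used in the paper's comparison of TD3 vs DDPG tuning.
        iae += abs(error) * DT
        ise += error * error * DT
        itae += t * abs(error) * DT
        t += DT
    return h2, iae, ise, itae

if __name__ == "__main__":
    h2_final, iae, ise, itae = simulate()
    print(f"final level {h2_final:.3f}, IAE {iae:.2f}, "
          f"ISE {ise:.2f}, ITAE {itae:.2f}")
```

In the RL-based tuning the paper describes, `simulate(kp, ki)` would play the role of the environment: the agent proposes gains, and a reward derived from metrics such as these drives the TD3 or DDPG policy update.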
Cite this Research Publication : U. Kruthika, Swetha Ankireddy, Govardhan Subudhi, Meghna Baruwa, Surekha Paneerselvam, A Reinforcement Learning Framework for Control of Two-Tank Interacting System Using PID Controller, 2024 6th International Conference on Electrical, Control and Instrumentation Engineering (ICECIE), IEEE, 2024, https://doi.org/10.1109/icecie63774.2024.10815694