
Adaptive Scheduling and Managing Resources in Changing Industrial Settings: Deep Reinforcement Learning for the Internet of Things

Publication Type : Conference Paper

Publisher : IEEE

Source : 2025 IEEE 2nd International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS)

Url : https://doi.org/10.1109/iciteics64870.2025.11341125

Campus : Nagercoil

School : School of Computing

Year : 2025

Abstract : The Industrial Internet of Things (IIoT) is increasingly adopted in industrial applications, as it enables real-time process monitoring, machine control, and networked device coordination. In rapidly changing, complex resource management and scheduling settings, traditional techniques find it increasingly difficult to meet the requirements of industrial environments. These environments exhibit dynamic workloads, unexpected task interruptions or cancellations, energy limitations, and diverse device capabilities, which necessitate intelligent decision-making frameworks that can adapt over time. In this paper, we investigate the application of Deep Reinforcement Learning (DRL) as a promising solution to these challenges for intelligent scheduling and resource provisioning in Industrial IoT (IIoT) systems. DRL integrates the decision-making framework of Reinforcement Learning (RL) with the representational capacity of Deep Neural Network (DNN) models, enabling agents to acquire optimal policies from high-dimensional input data streams such as those occurring in industrial settings. By modelling the resource allocation problem as a Markov Decision Process (MDP), DRL agents are trained to allocate resources dynamically, taking into account the real-time system state, including machine availability, task urgency, device power levels, and network conditions. DRL algorithms including DQN, PPO, and A2C are tested in different industrial IoT simulation tasks to optimize metrics such as latency, throughput, energy efficiency, and fault tolerance. The simulation is set up as a smart factory with a mix of IoT devices handling production, monitoring, and logistics. DRL agents are trained to make decisions on job scheduling, task offloading, and energy budgeting while responding to fluctuating system conditions.
Reward functions are handcrafted to balance competing objectives such as performance, efficiency, and reliability. Experimental studies demonstrate that DRL offers significant advantages over traditional fixed and rule-based scheduling policies, especially under uncertain and rapidly varying environmental conditions. This work not only demonstrates the effectiveness of DRL in improving operational intelligence in industrial IoT systems, but also opens a path towards highly scalable and adaptive decision-making in Industry 4.0. It further supports the deployment of autonomous systems capable of learning and continuously adapting to increase production while reducing resource consumption and operational downtime. The implications are significant, paving the way towards self-organizing, more robust industrial ecosystems.
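To make the MDP formulation in the abstract concrete, the following is a minimal, illustrative sketch only: a toy tabular Q-learning scheduler standing in for the paper's deep agents (DQN, PPO, A2C). The state features, action names, and reward weights below are hypothetical and not taken from the paper.

```python
import random

ACTIONS = ["run_local", "offload_edge", "defer"]  # example scheduling decisions

def make_state(machine_free, task_urgent, battery_low):
    """Encode observed conditions (machine availability, task urgency,
    device power) as a hashable discrete MDP state."""
    return (machine_free, task_urgent, battery_low)

def reward(latency_ms, energy_j, completed, weights=(0.5, 0.3, 0.2)):
    """Hand-crafted scalar reward trading off latency, energy use, and
    reliability, as the abstract describes; the weights are invented."""
    w_lat, w_energy, w_rel = weights
    return (-w_lat * latency_ms / 100.0
            - w_energy * energy_j
            + (w_rel if completed else 0.0))

class QScheduler:
    """Epsilon-greedy tabular Q-learning agent (a simple stand-in for DQN)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}             # (state, action) -> value estimate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def choose(self, state):
        """Pick a scheduling action: explore randomly or exploit Q-values."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, r, next_state):
        """One-step Q-learning update after observing reward r."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            r + self.gamma * best_next - old)
```

In a deep variant, the dictionary `self.q` would be replaced by a neural network mapping a continuous state vector to action values, which is what allows the approach to scale to the high-dimensional states of a smart factory.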

Cite this Research Publication : M. Subalatha, V. Thanammal Indu, Maram Y. Al-Safarini, Bhukya Balakrishna, P. Muthuchelvi, Abinash. M, Adaptive Scheduling and Managing Resources in Changing Industrial Settings: Deep Reinforcement Learning for the Internet of Things, 2025 IEEE 2nd International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS), IEEE, 2025, https://doi.org/10.1109/iciteics64870.2025.11341125
