The Facility Layout Problem (FLP) aims to optimize the arrangement of facilities to enhance productivity and minimize costs. Traditional methods struggle with the complexity and non-linearity of modern manufacturing environments. This study introduces an approach that combines Reinforcement Learning (RL) with simulation to optimize manufacturing line layouts. A Deep Q-Network (DQN) agent learns to reduce unused space, improve path efficiency, and maximize space utilization by optimizing facility placement and material flow. Simulations validate the resulting layouts and evaluate performance in terms of production output, path length, and bending frequency. This RL-based method offers a more adaptable and efficient solution to the FLP than traditional techniques, addressing both physical and operational optimization.
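The idea of framing layout as sequential placement with a flow-based reward can be illustrated with a deliberately tiny sketch. The abstract describes a DQN; the code below substitutes tabular Monte Carlo control (a simpler value-learning method) on a hypothetical 3-facility, 3-slot line, with an assumed flow matrix and a reward equal to the negative flow-weighted path length. All data and parameters are invented for illustration, not taken from the paper.

```python
import itertools
import random

# Toy instance (assumed data): place 3 facilities into 3 slots on a line.
# FLOW[i][j] is the material flow between facilities i and j.
FLOW = [[0, 5, 1],
        [5, 0, 2],
        [1, 2, 0]]
N = 3

def layout_cost(slots):
    """Flow-weighted total path length; slots[i] is the slot of facility i."""
    return sum(FLOW[i][j] * abs(slots[i] - slots[j])
               for i in range(N) for j in range(i + 1, N))

def train(episodes=2000, eps=0.2, seed=0):
    """Epsilon-greedy Monte Carlo control over sequential placements."""
    rng = random.Random(seed)
    Q, n = {}, {}  # action values and visit counts per (state, slot)
    for _ in range(episodes):
        placed, traj = [], []
        for _fac in range(N):            # place facilities 0..N-1 in order
            s = tuple(placed)            # state: slots chosen so far
            free = [x for x in range(N) if x not in placed]
            qs = Q.setdefault(s, {x: 0.0 for x in free})
            n.setdefault(s, {x: 0 for x in free})
            a = rng.choice(free) if rng.random() < eps else max(free, key=lambda x: qs[x])
            traj.append((s, a))
            placed.append(a)
        ret = -layout_cost(placed)       # terminal reward: negative layout cost
        for s, a in traj:                # sample-average Monte Carlo update
            n[s][a] += 1
            Q[s][a] += (ret - Q[s][a]) / n[s][a]
    return Q

def greedy_layout(Q):
    """Roll out the greedy policy implied by the learned values."""
    placed = []
    for _fac in range(N):
        s = tuple(placed)
        free = [x for x in range(N) if x not in placed]
        qs = Q.get(s, {x: 0.0 for x in free})
        placed.append(max(free, key=lambda x: qs[x]))
    return placed
```

On this instance the greedy rollout recovers a layout matching the brute-force optimum over all permutations; a DQN plays the same role when the state (a full layout grid plus material-flow features) is too large to tabulate.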
In this paper, we propose a deep Q-network-based resource allocation method for efficient communication between a base station and multiple Unmanned Aerial Vehicles (UAVs) in environments with limited wireless resources. The method focuses on maximizing the throughput of UAV-to-Infrastructure (U2I) links while ensuring that UAV-to-UAV (U2U) links meet their data transmission time constraints, even when U2U links share the wireless resources used by U2I links. The deep Q-network agent takes the Channel State Information (CSI) of both U2U and U2I links, along with the remaining time for data transmission, as its state, and determines the optimal Resource Block (RB) and transmission power for each UAV. Simulation results demonstrate that the proposed method significantly outperforms both random allocation and a CSI-based greedy algorithm in terms of U2I link throughput and the probability of meeting U2U link time constraints.
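The state/action structure described here (quantized CSI as state; an RB-and-power pair as action; reward trading U2I throughput against U2U constraint satisfaction) can be sketched in miniature. The code below is not the paper's DQN: it uses tabular Q-learning on a one-shot allocation with two RBs, two power levels, and two-level quantized gains, and it folds the remaining-time constraint into a fixed required U2U rate. All gains, powers, and reward weights are assumptions made for illustration.

```python
import math
import random

# Toy setup (all values assumed): 2 U2I links, each on its own Resource
# Block (RB); one U2U pair reuses one RB and chooses a transmit power.
NOISE = 0.1
H_U2I = 2.0              # fixed U2I channel gain
POWERS = [0.1, 1.0]      # low / high U2U transmit power
GAINS = [0.2, 2.0]       # quantized channel-gain levels (bad / good)
REQ_RATE = 2.0           # U2U rate needed to meet its time constraint
BONUS = 5.0              # reward for satisfying the U2U constraint
ACTIONS = [(rb, p) for rb in range(2) for p in range(2)]

def rate(signal, interference):
    """Shannon-style spectral efficiency in bits/s/Hz."""
    return math.log2(1.0 + signal / (NOISE + interference))

def sample_state(rng):
    # State: quantized CSI (U2U gain level, interference-gain level per RB).
    return (rng.randrange(2), rng.randrange(2), rng.randrange(2))

def reward(state, action):
    rb, p_idx = action
    power = POWERS[p_idx]
    u2i = 0.0
    for i in range(2):   # U2U interferes only with the RB it reuses
        interf = power * GAINS[state[1 + i]] if i == rb else 0.0
        u2i += rate(H_U2I, interf)
    u2u_ok = rate(power * GAINS[state[0]], 0.0) >= REQ_RATE
    return u2i + (BONUS if u2u_ok else 0.0)

def train(episodes=5000, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning with sample-average updates."""
    rng = random.Random(seed)
    Q, n = {}, {}
    for _ in range(episodes):
        s = sample_state(rng)
        qs = Q.setdefault(s, {a: 0.0 for a in ACTIONS})
        ns = n.setdefault(s, {a: 0 for a in ACTIONS})
        a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: qs[x])
        ns[a] += 1
        qs[a] += (reward(s, a) - qs[a]) / ns[a]
    return Q

def evaluate(Q, trials=500, seed=1):
    """Average reward of the greedy policy vs. random allocation."""
    rng = random.Random(seed)
    greedy = rnd = 0.0
    for _ in range(trials):
        s = sample_state(rng)
        qs = Q.get(s, {a: 0.0 for a in ACTIONS})
        greedy += reward(s, max(ACTIONS, key=lambda a: qs[a]))
        rnd += reward(s, rng.choice(ACTIONS))
    return greedy / trials, rnd / trials
```

The learned policy captures the trade-off the abstract describes: high U2U power is worth the extra interference only when the U2U gain is good enough to satisfy the rate constraint, and the reused RB is chosen where the interference gain is low. A DQN generalizes this table over continuous CSI and remaining-time inputs.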