Environmental issues have become a global concern in recent years, and countries worldwide are working toward carbon neutrality. In the automotive industry, the focus has shifted from internal combustion engine vehicles to eco-friendly vehicles such as Electric Vehicles (EVs), Hybrid Electric Vehicles (HEVs), and Fuel Cell Electric Vehicles (FCEVs). On the driving-strategy side, research on driving methods that reduce vehicle energy consumption, known as eco-driving, has recently been actively conducted. Conventional cruise-mode driving control is not an optimal driving strategy across varying driving environments. To maximize energy efficiency, this paper studies an eco-driving strategy for EVs based on reinforcement learning. A longitudinal dynamics-based electric vehicle simulator that includes road slope was constructed in MATLAB Simulink. Reinforcement learning algorithms, specifically Deep Deterministic Policy Gradient (DDPG) and Deep Q-Network (DQN), were applied to minimize the energy consumption of an EV driving on a sloped road. The agents were trained to maximize reward and thereby derive an optimal speed profile. We compare the learning results of the DDPG and DQN algorithms and examine the tendencies produced by each algorithm's parameters. The simulation showed that the energy efficiency of the EV improved compared with cruise-mode driving.
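As a rough illustration of the kind of simulator described above, the following Python sketch models longitudinal EV dynamics with a road-slope term and an energy-based reward. All vehicle parameters, the slope profile, and the class and method names are illustrative assumptions, not values from the paper.

# Minimal sketch of a longitudinal-dynamics EV environment with road slope.
# Vehicle parameters and the slope profile are assumed for illustration only.
import math

class EcoDrivingEnv:
    """One-dimensional EV model: state = (speed, position); action = acceleration command."""

    def __init__(self, dt=1.0, mass=1500.0, c_d=0.30, area=2.2,
                 c_rr=0.012, eta=0.90, rho=1.225, g=9.81):
        self.dt, self.m, self.c_d, self.A = dt, mass, c_d, area
        self.c_rr, self.eta, self.rho, self.g = c_rr, eta, rho, g
        self.reset()

    def slope(self, x):
        # Hypothetical road-grade profile in radians (the paper's slope data is not reproduced here).
        return 0.03 * math.sin(x / 200.0)

    def reset(self):
        self.v, self.x = 0.0, 0.0
        return (self.v, self.x)

    def step(self, accel):
        theta = self.slope(self.x)
        # Resistive forces: grade, rolling, and aerodynamic drag.
        f_grade = self.m * self.g * math.sin(theta)
        f_roll = self.c_rr * self.m * self.g * math.cos(theta)
        f_aero = 0.5 * self.rho * self.c_d * self.A * self.v ** 2
        f_trac = self.m * accel + f_grade + f_roll + f_aero
        # Electrical power drawn from (or regenerated to) the battery.
        p_mech = f_trac * self.v
        p_elec = p_mech / self.eta if p_mech >= 0 else p_mech * self.eta
        energy_J = p_elec * self.dt
        # Integrate the longitudinal dynamics.
        self.v = max(0.0, self.v + accel * self.dt)
        self.x += self.v * self.dt
        # Reward: negative energy use in kJ, which the RL agent maximizes.
        reward = -energy_J / 1000.0
        return (self.v, self.x), reward

In such a setup, a DQN agent would act on a discretized version of the acceleration command, while DDPG could act on the continuous value directly.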
It is important to minimize the electric energy consumption of a data center, which consumes enormous amounts of electricity to maintain an adequate indoor temperature. Most data centers apply outdoor-air cooling because of its economic feasibility, but an efficient control method is still needed to achieve additional energy savings. In this paper, we propose an artificial-intelligence-based real-time optimal control method that minimizes electricity consumption while assuring safe operation. The main idea of the proposed method is to perform an evolutionary search for the optimal range of the controlled variable under normal operating conditions. Furthermore, an optimal operating condition can be reached without requiring large-scale data to train a model. Experimental results demonstrate that the indoor temperature of a data center can be controlled safely and cost-effectively with the proposed methodology.
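As a rough illustration of the evolutionary-search idea, the following Python sketch evolves a candidate temperature-setpoint range against a hypothetical cost model with a safety penalty. The cost function, the temperature limit, and all parameter values are illustrative assumptions, not the paper's actual plant model or constraints.

# Minimal sketch of an evolutionary search over a temperature-setpoint range.
# The cost model and the safety limit below are hypothetical stand-ins.
import random

T_MAX = 27.0  # assumed safe upper bound on indoor temperature (deg C)

def cost(setpoint_low, setpoint_high):
    """Hypothetical stand-in for measured cooling energy plus a safety penalty."""
    energy = max(0.0, 24.0 - setpoint_low) * 1.5 + (setpoint_high - setpoint_low) * 0.2
    penalty = 100.0 if setpoint_high > T_MAX else 0.0
    return energy + penalty

def evolve(generations=200, pop_size=20, sigma=0.5):
    # Initialize a population of candidate (low, high) setpoint ranges.
    pop = [(random.uniform(18, 24), random.uniform(24, 28)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: cost(*r))
        parents = pop[: pop_size // 2]              # keep the cheaper half
        children = []
        for low, high in parents:
            low_c = low + random.gauss(0, sigma)    # Gaussian mutation
            high_c = max(low_c + 0.1, high + random.gauss(0, sigma))
            children.append((low_c, high_c))
        pop = parents + children
    return min(pop, key=lambda r: cost(*r))

best = evolve()
print("best setpoint range:", best)

In a real deployment, the cost function would be replaced by measured electricity consumption and constraint checks from the building-management system, with the search executed online during normal operation.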