Author Zhang, Xiangyu (National Renewable Energy Laboratory), author.

Title Restoring distribution system under renewable uncertainty using reinforcement learning / Xiangyu Zhang [and three others].

Publication Info. [Golden, Colo.] : National Renewable Energy Laboratory, 2020.

Description 1 online resource (12 pages, 1 unnumbered page) : color illustrations.
Content Type text txt rdacontent
Media Type computer c rdamedia
Carrier Type online resource cr rdacarrier
Series NREL/PR ; 2C00-78103
Note Slideshow presentation.
"IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, November 11-13, 2020."
"Funding provided by the Improving Distribution System Resiliency via Deep Reinforcement Learning Project funded by the U.S. Department of Energy Office of Electricity Advance Grid Modeling program"--Page 13.
Funding DE-AC36-08GO28308
Note Description based on online resource; title from PDF title page (NREL, viewed April 29, 2022).
Summary Distributed energy resources (DERs) in distribution systems, including renewable generation, micro-turbines, and energy storage, can be used to restore critical loads following extreme events, increasing grid resiliency. However, properly coordinating multiple DERs through a multi-step restoration process under renewable uncertainty and limited fuel availability is a complicated sequential optimal control problem. Because of its capability to handle system non-linearity and uncertainty, reinforcement learning (RL) stands out as a potentially powerful candidate for solving complex sequential control problems. Moreover, the offline training of RL provides excellent action readiness during online operation, making it suitable for problems such as load restoration, where timely, correct, and coordinated actions are needed. In this study, prioritized load restoration of a distribution system, represented by a simplified single-bus model, is examined: under imperfect renewable generation forecasts, the performance of an RL controller is compared with that of a deterministic model predictive control (MPC) scheme. The experimental results show that the RL controller is able to learn from experience, adapt to imperfect forecast information, and provide a more reliable restoration process than the baseline MPC controller.
Subject Reinforcement learning -- United States.
Electric power distribution -- United States.
Distributed generation of electric power -- United States.
Renewable energy sources -- United States.
Apprentissage par renforcement (Intelligence artificielle) -- États-Unis.
Électricité -- Production -- Génération répartie -- États-Unis.
Énergies renouvelables -- États-Unis.
Distributed generation of electric power
Electric power distribution
Reinforcement learning
Renewable energy sources
United States https://id.oclc.org/worldcat/entity/E39PBJtxgQXMWqmjMjjwXRHgrq
Indexed Term grid resiliency
load restoration
micro grid
reinforcement learning
Added Author National Renewable Energy Laboratory (U.S.), issuing body.
Standard No. 1821621 OSTI ID
Gpo Item No. 0430-P-09 (online)
Sudoc No. E 9.22:NREL/PR-2 C 00-78103