

Neural networks made easy (Part 49): Soft Actor-Critic

by , 12-24-2023 at 06:48 AM (190 Views)
In this article, we will focus our attention on another algorithm: Soft Actor-Critic (SAC). It was first presented in the paper "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor" (January 2018). The method appeared almost simultaneously with TD3, and the two algorithms share some similarities while differing in important ways. The main goal of SAC is to maximize the expected reward while also maximizing the entropy of the policy, which encourages exploration and allows the agent to find a variety of optimal solutions in stochastic environments.
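To make the entropy-regularized objective concrete, here is a minimal Python sketch (not the article's MQL5 implementation; the function names are illustrative). It computes the "soft" discounted return SAC maximizes, where each step's reward is augmented by the policy's entropy scaled by a temperature coefficient alpha, and shows the entropy of a Gaussian policy as one example of the entropy term.

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of a 1-D Gaussian policy with standard deviation sigma.
    A wider (more stochastic) policy has higher entropy."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

def soft_return(rewards, entropies, alpha=0.2, gamma=0.99):
    """Entropy-regularized (soft) return: sum over t of gamma^t * (r_t + alpha * H_t).
    alpha is the temperature that trades off reward against policy entropy."""
    return sum(gamma ** t * (r + alpha * h)
               for t, (r, h) in enumerate(zip(rewards, entropies)))

# With alpha = 0 this reduces to the ordinary discounted return; with alpha > 0,
# trajectories generated by a more stochastic policy receive a higher objective value.
plain = soft_return([1.0, 1.0], [0.0, 0.0], alpha=0.2)       # no entropy bonus
soft  = soft_return([1.0, 1.0], [1.0, 1.0], alpha=0.2)       # entropy bonus added
```

In the full algorithm this objective shapes both the critic targets and the actor update; the sketch only illustrates why, for equal rewards, SAC prefers the policy that stays more random.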

