
mql5

  1. Neural networks made easy (Part 53): Reward decomposition

    by , 12-31-2023 at 07:48 AM
    We continue to explore reinforcement learning methods. As you know, all model-training algorithms in this area of machine learning are based on the paradigm of maximizing rewards from the environment. The reward function plays a key role in the training process, and its signals are often ambiguous.
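    As an illustration of the reward-decomposition idea, here is a minimal Python sketch. The component names (profit, drawdown penalty, holding cost) are hypothetical examples, not taken from the article:

    ```python
    import numpy as np

    def decomposed_reward(profit, drawdown, holding_time):
        """Return the reward as a vector of components instead of one scalar.

        Decomposing the reward lets a critic attribute value to each
        signal separately before they are summed for the policy update.
        """
        components = np.array([
            profit,                # realized PnL of the step
            -abs(drawdown),        # penalty for open-position drawdown
            -0.01 * holding_time,  # small cost for holding a position
        ])
        return components

    r = decomposed_reward(profit=1.5, drawdown=0.4, holding_time=10)
    total = r.sum()  # the scalar reward a standard RL agent would see
    ```

    A standard agent sees only `total`; a decomposed critic learns a value estimate per component of `r`.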
  2. Data label for time series mining (Part 1)

    by , 12-30-2023 at 02:49 AM
    When we design artificial intelligence models, we often need to prepare the data first. Good data quality lets us get twice the result with half the effort in model training and validation. But forex or stock data is special: it contains complex market and time information, which makes labeling difficult, even though the trend in historical data is easy to analyze on a chart.

    This article introduces a method of making datasets with trend marks using an EA.
    ...
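    As a rough illustration of trend labeling (a minimal sketch, not the article's EA-based method), each bar can be labeled by comparing the close a few bars ahead with the current close; the horizon and threshold below are illustrative:

    ```python
    import numpy as np

    def label_trend(close, horizon=5, threshold=0.0):
        """Label each bar 1 (up), -1 (down) or 0 (flat) by comparing the
        close `horizon` bars ahead with the current close."""
        close = np.asarray(close, dtype=float)
        labels = np.zeros(len(close), dtype=int)
        for i in range(len(close) - horizon):
            change = close[i + horizon] - close[i]
            if change > threshold:
                labels[i] = 1
            elif change < -threshold:
                labels[i] = -1
        return labels  # trailing bars stay 0: no future data to compare

    prices = [1.10, 1.11, 1.12, 1.13, 1.12, 1.11, 1.10, 1.09]
    labels = label_trend(prices, horizon=3)
    ```

    A nonzero `threshold` filters out small moves so that noise is labeled flat rather than as a trend.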
  3. Neural networks made easy (Part 52): Research with optimism and distribution correction

    by , 12-29-2023 at 07:48 AM
    One of the basic elements for increasing the stability of Q-function learning is the use of an experience replay buffer. Increasing the buffer makes it possible to collect more diverse examples of interaction with the environment. This allows our model to better study and reproduce the Q-function of the environment. This technique is widely used in various reinforcement learning algorithms, including algorithms of the Actor-Critic family.
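    The buffer mechanism described above can be sketched minimally in Python; the capacity and batch size here are illustrative values, not the article's settings:

    ```python
    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity):
            # deque with maxlen evicts the oldest transition automatically
            self.buffer = deque(maxlen=capacity)

        def push(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            # Uniform sampling breaks temporal correlation between
            # consecutive transitions, stabilizing Q-function learning.
            return random.sample(self.buffer, batch_size)

        def __len__(self):
            return len(self.buffer)

    buf = ReplayBuffer(capacity=100)
    for t in range(150):                  # overfill to exercise eviction
        buf.push(t, 0, 0.0, t + 1, False)
    batch = buf.sample(32)
    ```

    Enlarging `capacity` keeps older, more diverse transitions available for sampling, which is exactly the diversity argument the article makes.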
  4. Neural networks made easy (Part 51): Behavior-Guided Actor-Critic (BAC)

    by , 12-27-2023 at 07:48 AM
    The last two articles were devoted to the Soft Actor-Critic algorithm. As you remember, this algorithm trains stochastic models in a continuous action space. Its main feature is the introduction of an entropy component into the reward function, which allows us to balance exploration of the environment against exploitation of the model. At the same time, this approach imposes some restrictions on the trained models. Using entropy requires some idea of the probability of taking
    ...
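    The entropy component can be sketched on a toy discrete policy (SAC itself operates in a continuous action space; the temperature `alpha` and the values below are purely illustrative):

    ```python
    import math

    def soft_value(probs, rewards, alpha):
        """Expected reward plus alpha times the policy's entropy.

        A higher alpha rewards more random (exploratory) policies;
        alpha -> 0 recovers the plain expected-reward objective.
        """
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        expected_reward = sum(p * r for p, r in zip(probs, rewards))
        return expected_reward + alpha * entropy

    # Two actions with identical reward: the entropy bonus makes the
    # uniform policy score higher than the deterministic one.
    uniform = soft_value([0.5, 0.5], [1.0, 1.0], alpha=0.2)
    greedy = soft_value([1.0, 0.0], [1.0, 1.0], alpha=0.2)
    ```

    With equal rewards, only the entropy term differentiates the two policies, which is why the objective pushes the model toward exploration when nothing else is at stake.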
  5. Neural networks made easy (Part 50): Soft Actor-Critic (model optimization)

    by , 12-25-2023 at 07:48 AM
    We continue to study the Soft Actor-Critic algorithm. In the previous article, we implemented the algorithm but were unable to train a profitable model. Today we will consider possible solutions. A similar question has already been raised in the article "Model procrastination, reasons and solutions". I propose to expand our knowledge in this area and consider new approaches using our Soft Actor-Critic model as an example.