
mql5

  1. Category Theory in MQL5 (Part 11): Graphs

01-05-2024 at 01:49 AM
In our previous article, we delved into monoid groups, exploring the concept of symmetry within typical monoids. By introducing an additional axiom that every member of a monoid group must possess an inverse, and by restricting binary operations between mirror elements to yield the identity element, we extended the applicability of monoids at crucial trade decision points. Building on this, we now continue our study of category theory and its practical applications in trade system development by examining
    ...
    Categories
    Uncategorized
  2. Neural networks made easy (Part 53): Reward decomposition

12-31-2023 at 06:48 AM
We continue to explore reinforcement learning methods. As you know, all algorithms for training models in this area of machine learning are based on the paradigm of maximizing rewards from the environment. The reward function plays a key role in the model training process. Its signals, however, are often ambiguous.
    more...
    Categories
    Uncategorized
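The idea behind reward decomposition can be sketched in a few lines: instead of collapsing the environment's signal into one scalar, the reward is kept as named components so a critic can learn one value head per component. This is a minimal illustration with hypothetical component names and numbers, not the article's implementation.

```python
# Hypothetical per-step reward components: profit, a drawdown penalty,
# and a transaction-cost penalty (illustrative numbers only).
reward_components = {"profit": 0.8, "drawdown": -0.2, "cost": -0.05}

# A single scalar reward hides which signal actually produced it.
scalar_reward = sum(reward_components.values())

# Decomposition keeps the components separate; a critic can learn one
# value head per component and recombine them with weights at the end.
weights = {"profit": 1.0, "drawdown": 1.0, "cost": 1.0}
decomposed_q = sum(w * reward_components[k] for k, w in weights.items())
```

Keeping the components separate is what lets the model attribute an ambiguous scalar signal to its distinct sources.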
Data label for time series mining (Part 1)

12-30-2023 at 01:49 AM
When we design artificial intelligence models, we often need to prepare the data first. Good data quality lets us get twice the result with half the effort in model training and validation. Foreign exchange and stock data are special, however: they contain complex market and time information, which makes data labeling difficult. Yet on a chart, the trend in historical data is easy to read.

This section introduces a method of making data sets with trend marks using an EA.
    ...
    Categories
    Uncategorized
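One common way to turn visible chart trends into labels is to mark each bar by the sign of its forward return over a fixed horizon. This is an assumed approach for illustration, not the article's EA code; the function name, horizon, and threshold are placeholders.

```python
def label_trends(closes, horizon=3, threshold=0.0):
    """Label each bar +1 (up-trend), -1 (down-trend) or 0 (flat / no data)
    by the price change over the next `horizon` bars."""
    labels = []
    for i in range(len(closes)):
        if i + horizon >= len(closes):
            labels.append(0)          # not enough future bars to label
            continue
        change = closes[i + horizon] - closes[i]
        if change > threshold:
            labels.append(1)
        elif change < -threshold:
            labels.append(-1)
        else:
            labels.append(0)
    return labels

prices = [1.10, 1.11, 1.13, 1.12, 1.10, 1.08, 1.09]
print(label_trends(prices))
```

Because the label at bar `i` looks `horizon` bars into the future, such a data set can only be built from history, which is exactly why an EA running over historical charts is a natural labeling tool.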
  4. Neural networks made easy (Part 52): Research with optimism and distribution correction

12-29-2023 at 06:48 AM
One of the basic elements for increasing the stability of Q-function learning is the use of an experience replay buffer. Increasing the buffer size makes it possible to collect more diverse examples of interaction with the environment, which allows our model to better study and reproduce the Q-function of the environment. This technique is widely used in various reinforcement learning algorithms, including algorithms of the Actor-Critic family.
    more...
    Categories
    Uncategorized
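The experience replay buffer described above is, in its generic form, a bounded store of transitions sampled uniformly at random. The sketch below is a minimal standard-library version (not the article's code) showing why capacity controls the diversity of the examples the model trains on.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay buffer (generic sketch)."""

    def __init__(self, capacity):
        # A bounded deque silently evicts the oldest transitions,
        # so `capacity` bounds how far back experience is retained.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes Q-function learning.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(10):
    buf.push(state=t, action=0, reward=0.1, next_state=t + 1, done=False)
batch = buf.sample(4)
```

A larger `capacity` keeps older, more varied interactions available for sampling, which is the diversity effect the excerpt refers to.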
  5. Neural networks made easy (Part 51): Behavior-Guided Actor-Critic (BAC)

12-27-2023 at 06:48 AM
The last two articles were devoted to the Soft Actor-Critic algorithm. As you remember, the algorithm is used to train stochastic models in a continuous action space. Its main feature is the introduction of an entropy component into the reward function, which lets us adjust the balance between exploring the environment and exploiting the model. At the same time, this approach imposes some restrictions on the trained models. Using entropy requires some idea of the probability of taking
    ...
    Categories
    Uncategorized
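The entropy component mentioned in the excerpt can be illustrated with a toy discrete policy: the environment's reward is augmented by the policy's entropy, scaled by a temperature `alpha`. This is a conceptual sketch, not Soft Actor-Critic itself (which works with log-probabilities of sampled continuous actions); the function names and numbers are illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def soft_reward(env_reward, action_probs, alpha=0.2):
    # Higher alpha rewards uncertain (exploratory) policies more;
    # lower alpha favors exploiting what the model already knows.
    return env_reward + alpha * entropy(action_probs)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximum entropy -> larger bonus
greedy  = [0.97, 0.01, 0.01, 0.01]   # near-deterministic -> small bonus
print(soft_reward(1.0, uniform), soft_reward(1.0, greedy))
```

With the same environment reward, the uniform policy receives the larger soft reward, which is exactly the mechanism that pushes the model toward exploration until `alpha` is annealed down.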