This series of articles on the MQL5 Wizard is a segue into how abstract ideas from mathematics or other fields of life can be brought to life as trading systems and tested or validated before any serious commitment is made on their premise. This ability to take simple, not fully implemented or envisaged ideas and explore their potential as trading systems is one of the gems offered by the MQL5 Wizard assembly for Expert Advisors. The expert classes of the wizard furnish a lot of the mundane ...
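To make the idea concrete, here is a minimal sketch of the kind of custom signal class the wizard assembles an Expert Advisor around. CExpertSignal and its LongCondition/ShortCondition overrides are the standard library interface; the class name CSignalIdea and its one-bar momentum rule are purely illustrative assumptions, not a real strategy.

    #include <Expert\ExpertSignal.mqh>
    //+------------------------------------------------------------------+
    //| Illustrative wizard-compatible signal class (not a real strategy)|
    //+------------------------------------------------------------------+
    class CSignalIdea : public CExpertSignal
      {
    public:
                         CSignalIdea(void) { m_used_series=USE_SERIES_OPEN+USE_SERIES_CLOSE; }
       //--- vote 0..100 for opening a long position
       virtual int       LongCondition(void)
         {
          // hypothetical rule: the last closed bar was bullish
          return(Close(1)>Open(1) ? 80 : 0);
         }
       //--- vote 0..100 for opening a short position
       virtual int       ShortCondition(void)
         {
          return(Close(1)<Open(1) ? 80 : 0);
         }
      };

Once compiled, such a class can be selected in the wizard's signal step, and the generated expert handles order management, money management, and trailing around it.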
Welcome back, fellow traders and aspiring algorithmic enthusiasts! As we step into the third chapter of our MQL5 journey, we stand at the crossroads of theory and practice, poised to unravel the secrets behind arrays, custom functions, preprocessors, and event handling. Our mission is to empower every reader, regardless of their programming background, with a profound understanding of these fundamental MQL5 elements. ...
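As a small taste (this sketch is ours, not taken from the article), the snippet below touches all four elements at once: a preprocessor constant, an array, a custom function, and the OnTick event handler.

    #define BAR_COUNT 3                    // preprocessor: symbolic constant

    double ExtCloses[BAR_COUNT];           // array: fixed-size price buffer

    // custom function: arithmetic mean of an array passed by reference
    double AverageOf(const double &values[])
      {
       int n=ArraySize(values);
       if(n==0)
          return(0.0);
       double sum=0.0;
       for(int i=0; i<n; i++)
          sum+=values[i];
       return(sum/n);
      }

    // event handler: the terminal calls this on every incoming tick
    void OnTick()
      {
       for(int i=0; i<BAR_COUNT; i++)
          ExtCloses[i]=iClose(_Symbol,PERIOD_CURRENT,i+1); // last closed bars
       PrintFormat("Average close of last %d bars: %f",BAR_COUNT,AverageOf(ExtCloses));
      }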
I sometimes receive private messages from those who want to learn how to create their own Expert Advisors or indicators. Although there is a lot of material on this site and on the Internet in general, including very good resources with examples, beginners still need help. Some users want a more consistent presentation, others need more clarity, and so on. Sometimes users ask: "Add comments to the code of a working Expert Advisor, and I will understand everything and make the same one myself!" ...
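In that spirit, here is what a heavily commented skeleton of an Expert Advisor might look like; it is a generic illustration rather than the code of any particular working EA.

    // illustrative Expert Advisor skeleton with explanatory comments
    int OnInit()
      {
       // runs once when the EA is attached: validate inputs,
       // create indicator handles, set up timers here
       return(INIT_SUCCEEDED);
      }
    void OnTick()
      {
       // runs on every new tick of the chart symbol: read prices and
       // indicators, make the trading decision, send orders here
      }
    void OnDeinit(const int reason)
      {
       // runs once when the EA is removed or the chart closes:
       // release handles and clean up; 'reason' encodes why
      }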
We continue the theme of environment exploration in reinforcement learning. In previous articles in this series, we have already looked at algorithms for exploring the environment through curiosity and through disagreement within an ensemble of models. Both approaches used intrinsic rewards to motivate the agent to perform different actions in similar situations while exploring new areas. The problem is that the intrinsic reward diminishes as the environment becomes better explored. In complex cases ...
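To see why such a bonus fades, consider a classic count-based sketch (our illustration, not the method the article develops): the bonus shrinks as a state accumulates visits.

    // illustrative only: 'visits' is a hypothetical per-state visit counter
    double IntrinsicBonus(const int visits)
      {
       // bonus decays as 1/sqrt(N(s)+1), vanishing for well-explored states
       return(1.0/MathSqrt(visits+1.0));
      }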
This algorithm is based on a self-learning method in which the agent uses information obtained while interacting with the environment to generate "intrinsic" rewards and update its strategy. It relies on several agent models that interact with the environment and produce differing predictions. If the models disagree, this is treated as an "interesting" event, and the agent is incentivized to explore that region of the environment. In this way, the algorithm incentivizes the agent ...
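A minimal sketch of the core mechanism, assuming the ensemble's predictions for the next state are collected into an array: the intrinsic reward is taken as the variance of those predictions, so high disagreement yields a larger exploration bonus.

    // illustrative: disagreement measured as variance of ensemble predictions
    double DisagreementReward(const double &predictions[])
      {
       int n=ArraySize(predictions);
       if(n<2)
          return(0.0);            // no ensemble, no disagreement
       double mean=0.0;
       for(int i=0; i<n; i++)
          mean+=predictions[i];
       mean/=n;
       double variance=0.0;
       for(int i=0; i<n; i++)
          variance+=MathPow(predictions[i]-mean,2.0);
       return(variance/n);        // high variance => "interesting" event
      }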