I sometimes receive private messages from people who want to learn how to create their own Expert Advisors or indicators. Although there is a lot of material on this site and on the Internet in general, including very good resources with examples, beginners still need help. Some users want a more consistent presentation, others need more clarity, and so on. Sometimes users ask: "Add comments to the code of a working Expert Advisor, and I will understand everything and write the same one myself!" ...
In Part 6, we explore the remaining array functions, rounding out your understanding of these useful tools. Our objective is still to cover the basic ideas required for automating trading strategies, whatever your experience as a developer or your familiarity with algorithmic trading. By working through the details of these functions, we aim to give every reader a practical command of them ...
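As a taste of what these look like in practice, here is a minimal MQL5 script sketch. The specific functions covered in Part 6 may differ; this example simply exercises a few standard array functions (ArrayResize, ArrayCopy, ArraySort, ArrayMaximum, ArraySize).

//+------------------------------------------------------------------+
//| Minimal script exercising a few standard MQL5 array functions    |
//+------------------------------------------------------------------+
void OnStart()
  {
   double src[5] = {1.2, 3.4, 0.5, 2.2, 1.9};
   double prices[];                   // dynamic array
   ArrayResize(prices, 5);            // allocate room for 5 elements
   ArrayCopy(prices, src);            // copy the source values in
   ArraySort(prices);                 // sort in ascending order
   int imax = ArrayMaximum(prices);   // index of the largest element
   PrintFormat("size=%d, max=%.2f", ArraySize(prices), prices[imax]);
  }

Run as a script, this prints the array's size and its largest element after sorting.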
The main advantage of relational models is the ability to build dependencies between objects, which makes it possible to structure the source data. A relational model can be represented as a graph, in which objects and events are nodes, while the edges show dependencies between those objects and events. more...
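To make the graph representation concrete, here is a minimal sketch in MQL5; the GraphNode structure and AddEdge helper are illustrative assumptions, not code from the article.

//--- a node holds an object/event label and an adjacency list of
//--- the nodes it depends on (names here are hypothetical)
struct GraphNode
  {
   string name;     // object or event
   int    edges[];  // indices of dependent nodes
  };

//--- hypothetical helper: record a dependency from 'from' to 'to'
void AddEdge(GraphNode &nodes[], const int from, const int to)
  {
   int n = ArraySize(nodes[from].edges);
   ArrayResize(nodes[from].edges, n + 1);
   nodes[from].edges[n] = to;
  }

An adjacency list like this keeps every dependency explicit: the node labels are the objects and events, and the stored indices are the relationships between them.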
Reinforcement learning is built on maximizing the reward received from the environment during interaction with it. Naturally, the learning process requires constant interaction with the environment. In practice, however, some tasks impose restrictions on that interaction. A possible solution in such situations is offline reinforcement learning, which allows you to train models on a limited archive of trajectories ...
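As a rough illustration of the idea, not of the algorithm used in this series, here is a minimal tabular Q-learning sketch in MQL5 that trains purely from a fixed archive of stored transitions; the structure names, sizes, and hyperparameters are all placeholders.

#define N_STATES  10
#define N_ACTIONS 3

//--- one stored transition from a pre-collected trajectory archive
struct Transition
  {
   int    state;       // state index observed by the behavior policy
   int    action;      // action that was taken
   double reward;      // reward that was received
   int    next_state;  // resulting state index
  };

double Q[N_STATES][N_ACTIONS];  // value table being trained

//--- offline Q-learning: the model iterates over the fixed archive
//--- and never queries the live environment
void TrainOffline(const Transition &archive[], const int epochs,
                  const double alpha = 0.1, const double gamma = 0.99)
  {
   for(int e = 0; e < epochs; e++)
      for(int i = 0; i < ArraySize(archive); i++)
        {
         Transition t = archive[i];
         double best_next = Q[t.next_state][0];
         for(int a = 1; a < N_ACTIONS; a++)
            best_next = MathMax(best_next, Q[t.next_state][a]);
         Q[t.state][t.action] += alpha * (t.reward + gamma * best_next
                                          - Q[t.state][t.action]);
        }
  }

The key point is in the signature: the learner receives only the archive, so no restriction on live interaction with the environment can block training.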
In this article, we will get acquainted with the Exploratory Data for Offline RL (ExORL) framework, presented in the paper "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning". Its results demonstrate that the right approach to data collection affects the final learning outcome as strongly as the choice of learning algorithm and model architecture. more...
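A distinctive step in ExORL is that the archive is collected by an exploratory policy and therefore carries no task reward, so the stored transitions are relabeled with the target task's reward before offline training. Here is a minimal sketch of that step, reusing the Transition structure from the example above; RewardFor is a hypothetical task reward, not a function from the framework.

//--- hypothetical task reward: here, simply reaching the last state
double RewardFor(const Transition &t)
  {
   return (t.next_state == N_STATES - 1) ? 1.0 : 0.0;
  }

//--- relabel every transition in the exploratory archive with the
//--- target task's reward before handing it to the offline learner
void RelabelArchive(Transition &archive[])
  {
   for(int i = 0; i < ArraySize(archive); i++)
      archive[i].reward = RewardFor(archive[i]);
  }

After relabeling, the same TrainOffline routine shown earlier can be applied unchanged, which mirrors the paper's point: it is the data, not the algorithm, that changes.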