Welcome to the third installment of our "Optimizing a Simple Hedging Strategy" series. In this segment, we begin with a brief review of our progress to date. So far, we have developed two key components: the Simple Hedge Expert Advisor (EA) and the Simple Grid EA. This article focuses on further refining the Simple Hedge EA. Our goal is to improve its performance through a combination of mathematical analysis and a brute-force approach, to find the most effective way to implement this ...
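The EA's actual parameters are not listed in this excerpt, but the brute-force pass typically amounts to exposing the tunable values as input variables so the MetaTrader 5 Strategy Tester can sweep their ranges. A minimal sketch, with hypothetical parameter names that are assumptions rather than the article's own:

```
// Hypothetical optimizable inputs for a hedge EA (names are assumptions);
// the Strategy Tester brute-forces over the ranges set for each input.
input double InpHedgeDistance = 200;   // distance in points before a hedge leg is opened
input double InpLotMultiplier = 2.0;   // lot multiplier applied to each successive hedge leg
input int    InpMaxHedgeLegs  = 5;     // cap on the number of hedge legs

int OnInit()
  {
   // Discard parameter combinations that cannot recover the hedged loss,
   // so the tester does not waste an optimization pass on them.
   if(InpLotMultiplier <= 1.0 || InpMaxHedgeLegs < 1)
      return(INIT_PARAMETERS_INCORRECT);
   return(INIT_SUCCEEDED);
  }
```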
We continue the series on MQL5 Wizard implementation by looking into Neural Architecture Search, dwelling specifically on the role eigenvectors can play in making this process of expediting network training more efficient. Neural networks are arguably a form of curve fitting: they produce a formulaic expression that, when applied to input data (x), yields a target value (y), just as a quadratic equation does for a curve. The x and y data points ...
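The curve-fitting analogy is easy to make concrete. The following is a minimal sketch, not taken from the article, that fits a quadratic y = a*x^2 + b*x + c to a few illustrative points via the normal equations using MQL5's built-in matrix and vector types, then takes the eigen decomposition of X'X to show the kind of structure eigenvectors expose; all values are assumptions.

```
void OnStart()
  {
   // Sample points lying roughly on y = x^2 + 1 (illustrative values only)
   vector x = {1, 2, 3, 4, 5};
   vector y = {2.1, 4.9, 10.2, 16.8, 26.1};

   ulong  n = x.Size();
   matrix X(n, 3);                 // design matrix with columns [x^2, x, 1]
   matrix Y(n, 1);                 // targets as a column matrix
   for(ulong i=0; i<n; i++)
     {
      X[i][0] = x[i]*x[i];
      X[i][1] = x[i];
      X[i][2] = 1.0;
      Y[i][0] = y[i];
     }

   // Normal equations: coeffs = (X'X)^-1 * X' * Y
   matrix Xt     = X.Transpose();
   matrix XtX    = Xt.MatMul(X);
   matrix coeffs = XtX.Inv().MatMul(Xt).MatMul(Y);
   PrintFormat("fit: y = %.3f*x^2 + %.3f*x + %.3f",
               coeffs[0][0], coeffs[1][0], coeffs[2][0]);

   // Eigen decomposition of X'X: the eigenvectors give the principal
   // directions of the input data, the structure the article builds on
   matrix evec;
   vector eval;
   if(XtX.Eig(evec, eval))
      Print("eigenvalues of X'X: ", eval);
  }
```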
I sometimes receive private messages from those who want to learn how to create their own Expert Advisors or indicators. Although there is a lot of material on this site and on the Internet in general, including very good resources with examples, beginners still need help. Some users seek more consistency in presentation, others need greater clarity, and so on. Sometimes users ask: "Add comments to the code of a working Expert Advisor; I will understand everything and make the same one myself!" ...
When we design artificial intelligence models, we often need to prepare the data first. Good data quality lets us get twice the result with half the effort in model training and validation. Foreign exchange and stock data are special, however: they contain complex market and time information, which makes labeling difficult, yet we can easily spot trends in historical data on a chart. This section introduces a method of making datasets with trend marks using an EA ...
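As one concrete illustration, here is a minimal sketch (not the article's EA; the lookahead and threshold are assumptions) that labels each historical bar with a trend mark based on the price move over the following bars and writes the result to a CSV file for later training:

```
// Label each bar +1 if price rises by more than a threshold over the
// next `lookahead` bars, -1 if it falls by more, 0 otherwise, and save
// the labeled series to a CSV file in the terminal's Files folder.
void OnStart()
  {
   const int    lookahead = 12;       // bars to look ahead (assumption)
   const double threshold = 0.0020;   // minimum move to count as a trend (assumption)

   double close[];
   int copied = CopyClose(_Symbol, PERIOD_H1, 0, 1000, close);
   if(copied <= lookahead)
      return;
   ArraySetAsSeries(close, false);    // oldest bar first

   int handle = FileOpen("trend_labels.csv", FILE_WRITE|FILE_CSV, ',');
   if(handle == INVALID_HANDLE)
      return;
   FileWrite(handle, "index", "close", "label");

   for(int i=0; i<copied-lookahead; i++)
     {
      double move  = close[i+lookahead] - close[i];
      int    label = (move >  threshold) ?  1 :
                     (move < -threshold) ? -1 : 0;
      FileWrite(handle, i, close[i], label);
     }
   FileClose(handle);
  }
```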
Reinforcement learning is a universal framework for learning optimal behavior policies in an environment being explored. Policy optimality is achieved by maximizing the rewards received from the environment while interacting with it. But herein lies one of the main problems of this approach: designing an appropriate reward function often requires significant human effort, and rewards may be sparse and/or insufficient to express the true learning goal. As one of the options ...
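To make the reward-function problem tangible, here is a minimal sketch, an assumption rather than the article's method, of the simplest hand-crafted reward for a trading agent: the change in account equity per step. It is dense, yet it can still fail to express the true learning goal, since it scores unrealized swings the same as realized profit.

```
// A naive per-step reward: the change in account equity since the
// previous step. Dense, but only a proxy for the true learning goal.
double g_prev_equity = 0.0;

double StepReward()
  {
   double equity = AccountInfoDouble(ACCOUNT_EQUITY);
   double reward = (g_prev_equity == 0.0) ? 0.0 : equity - g_prev_equity;
   g_prev_equity = equity;
   return reward;
  }
```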