Welcome to the third installment of our "Optimizing a Simple Hedging Strategy" series. In this segment, we'll begin with a brief review of our progress to date. So far, we have developed two key components: the Simple Hedge Expert Advisor (EA) and the Simple Grid EA. This article will focus on further refining the Simple Hedge EA. Our goal is to improve its performance through a combination of mathematical analysis and a brute force approach to find the most effective way to implement this ...
I sometimes receive private messages from people who want to learn how to create their own Expert Advisors or indicators. Although there is a lot of material on this site and on the Internet in general, including very good resources with examples, beginners still need help. Some users seek more consistency in presentation, others need greater clarity, and so on. Sometimes users ask: "Add comments to the code of a working Expert Advisor; I will understand everything and write the same one myself!" ...
All traders hope to maximize the percentage return on their investment; however, higher returns usually come with higher risk. This is why risk-adjusted returns are the main measure of performance in the investment industry. There are many different measures of risk-adjusted return, each with its own advantages and disadvantages. The Sharpe ratio is a popular risk-return measure known for imposing unrealistic assumptions on the distribution of returns ...
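As a rough illustration of the concept (not taken from the article itself), the Sharpe ratio is commonly computed as the mean excess return divided by the standard deviation of returns. A minimal Python sketch, using hypothetical return data:

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return over its standard deviation.

    `returns` is a sequence of per-period (e.g. daily) returns;
    `risk_free_rate` is the per-period risk-free return.
    """
    excess = np.asarray(returns, dtype=float) - risk_free_rate
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Hypothetical daily returns, purely for illustration
daily_returns = [0.001, -0.002, 0.0015, 0.003, -0.001]
print(sharpe_ratio(daily_returns))
```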
When we design artificial intelligence models, we often need to prepare the data first. High-quality data allows us to achieve better results with far less effort in model training and validation. However, foreign exchange and stock data are special: they contain complex market and time information, which makes labeling difficult, yet we can easily identify trends in historical data on a chart. This section introduces a method of building data sets with trend labels using an EA ...
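As a minimal sketch of the general idea (the labeling rule, thresholds, and output format here are assumptions for illustration, not the article's actual implementation), one could label each bar by the relative price change over a forward window and save the result as a data set:

```python
import numpy as np
import pandas as pd

def label_trends(closes, horizon=20, threshold=0.002):
    """Label each bar by the relative price change over the next `horizon` bars:
    1 = uptrend, -1 = downtrend, 0 = flat or undefined (end of series)."""
    closes = np.asarray(closes, dtype=float)
    labels = np.zeros(len(closes), dtype=int)
    for i in range(len(closes) - horizon):
        change = (closes[i + horizon] - closes[i]) / closes[i]
        if change > threshold:
            labels[i] = 1
        elif change < -threshold:
            labels[i] = -1
    return labels

# Hypothetical close prices, purely for illustration
closes = 1.10 + np.cumsum(np.random.default_rng(0).normal(0, 0.001, 500))
df = pd.DataFrame({"close": closes, "label": label_trends(closes)})
df.to_csv("trend_dataset.csv", index=False)
```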
Reinforcement learning is a universal framework for learning optimal behavior policies in the environment being explored. Policy optimality is achieved by maximizing the rewards received from the environment during interaction with it. But herein lies one of the main problems of this approach: creating an appropriate reward function often requires significant human effort. Additionally, rewards may be sparse and/or insufficient to express the true learning goal. As one of the options ...