When we design artificial intelligence models, we usually need to prepare the data first, and high-quality data makes model training and validation far more effective. Forex and stock data are special: they carry complex market and time information, which makes labeling them difficult, yet the trends in historical data are easy to see on a chart. This section introduces a method of building datasets with trend marks using an EA ...
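To make the idea concrete, here is a minimal sketch (in Python, not the article's EA code) of how bars could be given trend marks from the forward return; the horizon and threshold values are illustrative assumptions:

```python
# Minimal sketch: assign a trend mark to each bar from its forward return.
# `horizon` and `threshold` are assumed, illustrative parameters.
import pandas as pd

def label_trend(close: pd.Series, horizon: int = 24, threshold: float = 0.002) -> pd.Series:
    """Return 1 (uptrend), -1 (downtrend), or 0 (range) per bar."""
    fwd_return = close.shift(-horizon) / close - 1.0
    labels = pd.Series(0, index=close.index)
    labels[fwd_return > threshold] = 1
    labels[fwd_return < -threshold] = -1
    return labels

# Usage with a hypothetical DataFrame of exported history:
# df["trend"] = label_trend(df["close"])
```

An EA could compute such marks directly on chart history and export them alongside the corresponding features; the labeling rule itself stays this simple.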
Reinforcement learning is a general framework for learning optimal behavior policies through exploration of an environment. Policy optimality is achieved by maximizing the rewards received from the environment during interaction with it. But herein lies one of the main problems of this approach: creating an appropriate reward function often requires significant human effort, and rewards may be sparse and/or insufficient to express the true learning goal. As one of the options ...
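In standard reinforcement-learning notation (a general formulation, not one taken from this article), the agent seeks a policy that maximizes the expected discounted sum of rewards:

\[
\pi^{*} = \arg\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t}\right], \qquad 0 \le \gamma < 1.
\]

Since everything the agent learns is driven by the reward signal r_t, a hand-crafted reward that is sparse or misaligned with the true goal distorts the learned policy.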
The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). Explainable Artificial Intelligence attempts to help us understand how our models arrive at their decisions, but unfortunately this is easier said than done. We are all aware that machine learning models and the available datasets are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot ...
In the previous article, we discussed relational models, which use attention mechanisms in their architecture. We used this model to create an Expert Advisor, and the resulting EA showed good results. However, we noticed that the model learned more slowly than in our earlier experiments. This is because the transformer block used in the model is a rather complex architectural solution performing a large number of operations. The number of these operations grows in a quadratic ...
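A small sketch (Python with NumPy, purely illustrative rather than the article's code) makes the quadratic growth visible: self-attention builds an n×n score matrix, so compute and memory scale as O(n²·d) in the sequence length n:

```python
# Illustrative sketch of single-head self-attention cost; n and d are assumed values.
import numpy as np

n, d = 512, 64                    # sequence length, head dimension
Q = np.random.randn(n, d)         # queries
K = np.random.randn(n, d)         # keys
V = np.random.randn(n, d)         # values

scores = Q @ K.T / np.sqrt(d)     # n x n matrix: ~n^2 * d multiply-adds
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
out = weights @ V                 # another ~n^2 * d multiply-adds

# Doubling n quadruples the size of `scores`, so both compute and memory
# grow as O(n^2 * d).
```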
In the realm of machine learning, we often think in terms of trade-offs: optimising one performance metric frequently means compromising another. And as models grow ever larger and more intricate, understanding, explaining, and debugging them becomes a formidable task. Deciphering the intricacies beneath a model's surface and understanding why it makes the decisions it makes is vital. Without this clarity, how can we confidently ...