Working with ONNX models in float16 and float8 formats
Quote:
With the advancement of machine learning and artificial intelligence technologies, there is a growing need to optimize processes for working with models. The efficiency of model operation directly depends on the data formats used to represent them. In recent years, several new data types have emerged, specifically designed for working with deep learning models.
In this article, we will focus on two such new data formats - float16 and float8, which are beginning to be actively used in modern ONNX models. These formats represent alternative options to more precise but resource-intensive floating-point data formats. They provide an optimal balance between performance and accuracy, making them particularly attractive for various machine learning tasks. We will explore the key characteristics and advantages of float16 and float8 formats, as well as introduce functions for converting them to standard float and double formats.
This will help developers and researchers better understand how to effectively use these formats in their projects and models. As an example, we will examine the operation of the ESRGAN ONNX model, which is used for image quality enhancement.
more...
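As a rough illustration of what such conversions involve (this is not the MQL5 code from the article), here is a minimal Python sketch that widens an IEEE float16 bit pattern via NumPy and decodes a FLOAT8E4M3FN byte by hand; the bias and special-value constants follow the half-precision and E4M3FN layouts.

# Illustrative only: decoding IEEE float16 and FLOAT8E4M3FN bit patterns to Python floats.
import numpy as np

def fp16_to_float(bits: int) -> float:
    """Reinterpret a 16-bit pattern as IEEE half precision and widen to float32."""
    return float(np.uint16(bits).view(np.float16).astype(np.float32))

def fp8_e4m3fn_to_float(byte: int) -> float:
    """Decode one FLOAT8E4M3FN byte: 1 sign, 4 exponent (bias 7), 3 mantissa bits."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    mant = byte & 0x07
    if exp == 0x0F and mant == 0x07:          # the only NaN encoding in E4M3FN
        return float("nan")
    if exp == 0:                               # subnormal numbers
        return sign * (mant / 8.0) * 2.0 ** -6
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

print(fp16_to_float(0x3C00))      # 1.0
print(fp8_e4m3fn_to_float(0x7E))  # 448.0, the largest finite E4M3FN value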
Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning
Quote:
Offline reinforcement learning methods have recently become widespread, promising much in solving problems of varying complexity. However, one of the main problems researchers face is the optimism that can arise during training. The agent optimizes its strategy based on the data from the training set and gains confidence in its actions. But the training set is often unable to cover the entire variety of possible states and transitions of the environment. In a stochastic environment, such confidence turns out to be not entirely justified. In such cases, the agent's optimistic strategy may lead to increased risks and undesirable consequences.
more...
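One common way to temper such unwarranted confidence (not necessarily the method used in the article) is to take a pessimistic, lower-confidence-bound estimate over an ensemble of value critics, so that actions poorly supported by the training data look less attractive. A loose NumPy sketch of that idea, with hypothetical names:

import numpy as np

def pessimistic_q(q_ensemble: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """q_ensemble: shape (n_critics, n_actions). Penalize disagreement between
    critics, which tends to be high where the dataset gives little support."""
    mean_q = q_ensemble.mean(axis=0)
    std_q = q_ensemble.std(axis=0)
    return mean_q - beta * std_q            # lower-confidence-bound estimate

q = np.array([[1.0, 2.5, 0.3],
              [0.9, 1.1, 0.4],
              [1.1, 3.0, 0.2]])
print(pessimistic_q(q))  # the "optimistic" action 1 is penalized for its spread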
MQL5 Wizard Techniques you should know (Part 13). DBSCAN for Expert Signal Class
Quote:
This series of articles on the MQL5 Wizard shows how abstract ideas from mathematics and other fields can be brought to life as trading systems and tested or validated before any serious commitment is made on their premise. The ability to take simple ideas that are not yet fully implemented or fleshed out and explore their potential as trading systems is one of the gems offered by the MQL5 Wizard assembly for Expert Advisors. The wizard's expert classes furnish many of the mundane features required by any Expert Advisor, especially those relating to opening and closing trades, but also overlooked aspects such as executing decisions only when a new bar forms.
By keeping this library of processes as a separate aspect of an Expert Advisor, the MQL5 Wizard lets any idea not only be tested independently, but also compared on a somewhat equal footing with any other ideas (or methods) under consideration. In this series we have already looked at alternative clustering methods such as agglomerative clustering and k-means clustering.
more...
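The article embeds DBSCAN in an MQL5 Wizard signal class; purely as an illustration of the clustering step itself, here is a minimal scikit-learn sketch on hypothetical price-change features (the data and parameters are placeholders):

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
closes = np.cumsum(rng.normal(0, 1, 500)) + 100.0            # synthetic close prices
features = np.column_stack([np.diff(closes)[:-1],             # current bar's change
                            np.diff(closes)[1:]])             # next bar's change

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("noise points:", int(np.sum(labels == -1)))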
Neural networks made easy (Part 66): Exploration problems in offline learning
Neural networks made easy (Part 67): Using past experience to solve new tasks
Quote:
Reinforcement learning is built on maximizing the reward received from the environment during interaction with it. Obviously, the learning process requires constant interaction with the environment. However, situations differ: when solving some tasks, we may face various restrictions on such interaction. A possible solution in such cases is to use offline reinforcement learning algorithms, which allow you to train models on a limited archive of trajectories collected during preliminary interaction with the environment, while it was still available.
more...
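The defining constraint of the offline setting described above is that training draws only on a fixed archive of previously collected transitions, with no further environment interaction. A minimal sketch of that setup (the buffer contents and the update step are placeholders):

import numpy as np

rng = np.random.default_rng(42)
archive = {                                   # pre-collected (s, a, r, s') transitions
    "state":      rng.normal(size=(10_000, 8)),
    "action":     rng.integers(0, 4, size=10_000),
    "reward":     rng.normal(size=10_000),
    "next_state": rng.normal(size=(10_000, 8)),
}

def sample_batch(batch_size: int = 256) -> dict:
    """Draw a minibatch from the fixed archive; nothing new is ever added."""
    idx = rng.integers(0, len(archive["reward"]), size=batch_size)
    return {k: v[idx] for k, v in archive.items()}

for step in range(3):                          # training-loop skeleton
    batch = sample_batch()
    # ... update the policy/critic from `batch` only ...
    print(step, batch["state"].shape)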
Overcoming ONNX Integration Challenges
Quote:
ONNX (Open Neural Network Exchange) revolutionizes the way we build sophisticated AI-based MQL5 programs. This technology, new to MetaTrader 5, is the way forward for machine learning and shows a lot of promise for its purpose like no other. However, ONNX comes with a couple of challenges that can give you headaches if you have no idea how to solve them.
This article assumes you have a basic understanding of machine learning and AI theory, and that you have at least tried to use ONNX models in MQL5 once or twice.
more...
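One frequent source of such headaches is a mismatch between the shapes, names, or dtypes a model expects and what the program feeds it. As a hedged illustration (not the article's MQL5 code), a model can be inspected with Python's onnxruntime before being wired up anywhere else; "model.onnx" below is a placeholder path:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")

for inp in sess.get_inputs():                 # names, shapes and dtypes must match exactly
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)

# Run once with dummy data shaped like the first input (dynamic dims set to 1 here).
first = sess.get_inputs()[0]
dummy = np.zeros([d if isinstance(d, int) else 1 for d in first.shape], dtype=np.float32)
result = sess.run(None, {first.name: dummy})
print(result[0].shape)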
Data Science and ML (Part 22): Leveraging Autoencoders Neural Networks for Smarter Trades by Moving from Noise to Signal
Quote:
In this article, we will see how we can use an autoencoder neural network in the financial space to help us remove noise in the market so that we can discover trading opportunities.
This article is an easy read if you have a basic understanding of ONNX, PCA, and neural networks in general.
more...
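To make the noise-to-signal idea concrete, here is a minimal denoising-autoencoder sketch with Keras, assuming TensorFlow is installed; the synthetic data, window size, and layer sizes are placeholders and the article's actual model and features may differ:

import numpy as np
from tensorflow import keras

window = 32
clean = np.sin(np.linspace(0, 40 * np.pi, 5000))             # stand-in "signal"
series = clean + np.random.normal(0, 0.3, clean.shape)       # noisy observations
X = np.array([series[i:i + window] for i in range(len(series) - window)])

inputs = keras.Input(shape=(window,))
encoded = keras.layers.Dense(8, activation="relu")(inputs)    # bottleneck keeps the signal
decoded = keras.layers.Dense(window, activation="linear")(encoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

denoised = autoencoder.predict(X[:1], verbose=0)              # reconstruction ~ de-noised window
print(denoised.shape)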
Causal inference in time series classification problems
Quote:
In the previous article, we thoroughly examined training via a meta learner and cross-validation, as well as saving models in the ONNX format. I also noted that machine learning models are not capable of finding patterns in disparate and contradictory data out of the box. In this case, what exactly is fed to the input and output of a neural network, or any other machine learning algorithm, is very important.
...
This article describes an attempt to understand some causal inference techniques in relation to algorithmic trading.
more...
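As a small, hedged illustration of the causal-inference flavor of reasoning (one common construction, a T-learner, not necessarily the article's meta-learner pipeline), separate models can be fit on "treated" and "control" samples and their predictions compared; the data and split below are synthetic:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
treated = rng.integers(0, 2, size=2000).astype(bool)          # hypothetical "treatment" flag
y = X[:, 0] + treated * (0.5 + X[:, 1]) + rng.normal(0, 0.1, 2000)

model_t = GradientBoostingRegressor().fit(X[treated], y[treated])
model_c = GradientBoostingRegressor().fit(X[~treated], y[~treated])

uplift = model_t.predict(X) - model_c.predict(X)              # estimated individual effect
print(uplift.mean())                                          # should land near 0.5 here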
Data Science and ML (Part 23): Why LightGBM and XGBoost outperform a lot of AI models?
Quote:
Gradient Boosted Decision Trees (GBDT) are a powerful machine learning technique used primarily for regression and classification tasks. They combine the predictions of multiple weak learners, usually decision trees, to create a strong predictive model.
The core idea is to build models sequentially, each new model attempting to correct the errors made by the previous ones.
They have gained much popularity in the machine learning community as the algorithms of choice for many winning teams in machine learning competitions. In this article, we are going to discover how we can use these accurate models in our trading applications.
more...
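A minimal sketch of fitting both libraries on synthetic data, assuming the xgboost and lightgbm packages are installed; the features and labels stand in for whatever trading features the article derives:

import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 3000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for model in (XGBClassifier(n_estimators=200, max_depth=3),
              LGBMClassifier(n_estimators=200, max_depth=3)):
    model.fit(X_tr, y_tr)                     # trees are added sequentially, each
    print(type(model).__name__,               # correcting the previous ones' errors
          "accuracy:", model.score(X_te, y_te))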
Population optimization algorithms: Artificial Multi-Social Search Objects (MSO)
Quote:
In the previous article, we considered the evolution of social groups moving freely in the search space. Here, however, I propose that we change this concept and assume that groups move between sectors, jumping from one to another. All groups have their own centers, which are updated at each iteration of the algorithm. In addition, we introduce the concept of memory, both for the group as a whole and for each individual particle in it. With these changes, the algorithm now allows groups to move from sector to sector based on information about the best solutions.
more...
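A loose toy sketch of the mechanics described above (groups with centers, memory of best solutions, and jumps across the search space) on a simple 2-D objective; this is not the author's MSO implementation, and every detail here is simplified:

import numpy as np

def objective(p):                               # toy function to maximize
    return -np.sum((p - 3.0) ** 2)

rng = np.random.default_rng(7)
n_groups, n_particles = 4, 10
low, high = -10.0, 10.0                                            # search-space bounds

centers = rng.uniform(low, high, size=(n_groups, 2))               # one center per group
best_pos = centers.copy()                                          # group memory
best_val = np.array([objective(c) for c in centers])

for _ in range(100):
    for g in range(n_groups):
        # particles explore around their group's center
        particles = centers[g] + rng.normal(0, 1.0, size=(n_particles, 2))
        vals = np.array([objective(p) for p in particles])
        i = vals.argmax()
        if vals[i] > best_val[g]:                                  # update group memory
            best_val[g], best_pos[g] = vals[i], particles[i]
        # occasionally jump the center toward the best-known solution overall
        if rng.random() < 0.2:
            target = best_pos[best_val.argmax()]
            centers[g] = np.clip(target + rng.normal(0, 2.0, 2), low, high)
        else:
            centers[g] = best_pos[g]

print("best value found:", best_val.max(), "at", best_pos[best_val.argmax()])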