
Uncategorized

Entries with no category

  1. Neural networks made easy (Part 37): Sparse Attention

    by , 04-20-2024 at 02:24 PM
    In the previous article, we discussed relational models, which use attention mechanisms in their architecture. We used this model to create an Expert Advisor, and the resulting EA showed good results. However, we noticed that the model learned more slowly than in our earlier experiments. This is because the transformer block used in the model is a rather complex architectural solution that performs a large number of operations, and the number of these operations grows quadratically (a toy illustration of this cost appears after this listing)
    ...
    Categories
    Uncategorized
  2. Mastering Model Interpretation: Gaining Deeper Insight From Your Machine Learning Models

    by , 04-12-2024 at 05:31 AM
    In machine learning, we often think in terms of trade-offs: optimising one performance metric frequently means compromising another. As models grow increasingly large and intricate, understanding, explaining and debugging them becomes a formidable task. Looking beneath the model's surface and deciphering why our models make the decisions they make is vital. Without this clarity, how can we confidently
    ...
    Categories
    Uncategorized
  3. Evaluating ONNX models using regression metrics

    by , 04-06-2024 at 02:24 PM
    Regression is the task of predicting a real value from an unlabeled example. A well-known example of regression is estimating the value of a diamond based on characteristics such as size, weight, color, clarity, etc.

    So-called regression metrics are used to assess the accuracy of a regression model's predictions. Although the underlying algorithms are similar, regression metrics are semantically different from the corresponding loss functions (a small metric sketch appears after this listing).

    ...
    Categories
    Uncategorized
  4. Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement

    by , 03-30-2024 at 02:24 PM
    This algorithm is built on a self-supervised method in which the agent uses information gathered while interacting with the environment to generate "intrinsic" rewards and update its strategy. It relies on several agent models that interact with the environment and produce differing predictions. If the models disagree, the event is considered "interesting", and the agent is incentivized to explore that region of the environment (a toy disagreement sketch appears after this listing). In this way, the algorithm incentivizes the agent
    ...
    Categories
    Uncategorized
  5. Neural networks made easy (Part 65): Distance Weighted Supervised Learning (DWSL)

    by , 03-28-2024 at 06:32 AM
    Behavior cloning methods, largely based on the principles of supervised learning, show fairly good results, but their main problem remains finding ideal role models, which are sometimes very difficult to collect. Reinforcement learning methods, by contrast, can work with non-optimal raw data and can still find suboptimal policies that achieve the goal. However, when searching for an optimal policy, we often encounter an optimization problem that is especially relevant in high-dimensional
    ...
    Categories
    Uncategorized
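
For the sparse-attention entry above, the quadratic cost it refers to can be illustrated with a toy NumPy sketch: dense scaled dot-product attention builds a full T x T score matrix, while restricting each query to a local window is one simple sparse pattern that keeps the per-query cost constant. This is only an illustration under assumed shapes; the function names and the local-window pattern are not taken from the article's Expert Advisor code.

import numpy as np

def dense_attention(q, k, v):
    # Standard scaled dot-product attention: the (T x T) score matrix
    # makes both memory and compute grow quadratically with sequence length T.
    scores = q @ k.T / np.sqrt(q.shape[-1])            # shape (T, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def local_sparse_attention(q, k, v, window=16):
    # One simple sparsity pattern: each query attends only to keys inside
    # a local window, so the per-query cost is O(window) instead of O(T).
    T, d = q.shape
    out = np.zeros_like(v)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        scores = q[t] @ k[lo:hi].T / np.sqrt(d)        # shape (hi - lo,)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[t] = w @ v[lo:hi]
    return out

# Example with placeholder data: 256 time steps, 32-dimensional embeddings.
rng = np.random.default_rng(0)
q = rng.standard_normal((256, 32))
k = rng.standard_normal((256, 32))
v = rng.standard_normal((256, 32))
full = dense_attention(q, k, v)
sparse = local_sparse_attention(q, k, v, window=16)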
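
For the ONNX regression-metrics entry above, here is a minimal sketch of a few standard metrics (MAE, MSE, RMSE, R2) computed directly with NumPy; the helper name and the toy numbers are placeholders, not values or code from the article.

import numpy as np

def regression_metrics(y_true, y_pred):
    # Common regression metrics: MSE doubles as a loss function, while
    # R2 is purely an evaluation metric and is not minimized directly.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                         # mean absolute error
    mse = np.mean(err ** 2)                            # mean squared error
    rmse = np.sqrt(mse)                                # same units as the target
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

# Toy targets and predictions (placeholder values, not from the article).
print(regression_metrics([3.0, 5.0, 2.5, 7.0], [2.8, 5.4, 2.9, 6.5]))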
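
For the self-supervised exploration entry above, the disagreement idea can be sketched as follows: an ensemble of forward models predicts the next state for the same state-action pair, and the variance of their predictions serves as the intrinsic reward. The model and function names below are hypothetical stand-ins, not the article's implementation.

import numpy as np

def intrinsic_reward(ensemble, state, action):
    # Disagreement-based exploration bonus: run every forward model in the
    # ensemble on the same (state, action) pair and reward the variance of
    # their next-state predictions. High variance means the models disagree,
    # i.e. an "interesting", under-explored part of the environment.
    preds = np.stack([model(state, action) for model in ensemble])  # (N, state_dim)
    return preds.var(axis=0).mean()

# Toy ensemble: linear forward models with different random weights
# (placeholders standing in for trained neural networks).
state_dim, action_dim, n_models = 4, 2, 5

def make_model(seed):
    w = np.random.default_rng(seed).standard_normal((state_dim + action_dim, state_dim))
    return lambda s, a: np.concatenate([s, a]) @ w

ensemble = [make_model(i) for i in range(n_models)]
rng = np.random.default_rng(1)
state = rng.standard_normal(state_dim)
action = rng.standard_normal(action_dim)
print("intrinsic reward:", intrinsic_reward(ensemble, state, action))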