
Uncategorized

Entries with no category

  1. Developing a multi-currency Expert Advisor (Part 2): Transition to virtual positions of trading strategies

    09-26-2024 at 04:55 AM
    In the previous article, we started developing a multi-currency EA that works with several trading strategies simultaneously. At the first stage there were only two strategies: they implemented the same trading idea, worked on the same instrument (symbol) and chart period (timeframe), and differed only in the numerical values of their parameters.

    We are now only interested in testing the suitability of this approach, and
    ...
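As a rough illustration of the idea above, here is a minimal Python sketch (not the article's MQL5 code; the `SmaCrossStrategy` class, its parameters and the price series are all hypothetical) of two strategy instances that share one trading idea, symbol and timeframe and differ only in numeric parameter values:

```python
from dataclasses import dataclass

@dataclass
class SmaCrossStrategy:
    # Hypothetical inputs; the article's actual strategy parameters differ.
    symbol: str
    timeframe: str
    fast_period: int
    slow_period: int

    def signal(self, prices: list) -> int:
        """Return +1 (buy), -1 (sell) or 0 from a moving-average cross."""
        if len(prices) < self.slow_period:
            return 0
        fast = sum(prices[-self.fast_period:]) / self.fast_period
        slow = sum(prices[-self.slow_period:]) / self.slow_period
        return 1 if fast > slow else (-1 if fast < slow else 0)

# Two instances of the same idea on the same symbol and timeframe,
# differing only in numeric parameter values:
strategies = [
    SmaCrossStrategy("EURUSD", "H1", fast_period=3, slow_period=5),
    SmaCrossStrategy("EURUSD", "H1", fast_period=2, slow_period=8),
]
prices = [1.10, 1.11, 1.12, 1.13, 1.12, 1.11, 1.12, 1.13]
signals = [s.signal(prices) for s in strategies]
```

Because the strategies expose the same interface, the EA can iterate over the list and aggregate their signals into virtual positions regardless of how many parameter sets are added later.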
  2. Market Reactions and Trading Strategies in Response to Dividend Announcements: Evaluating the Efficient Market Hypothesis in Stock Trading

    09-10-2024 at 03:45 PM
    This paper analyzes the impact of dividend announcements on the stock market returns that investors earn.
    ...
  3. Neural networks made easy (Part 71): Goal-Conditioned Predictive Coding (GCPC)

    08-22-2024 at 07:30 AM
    Goal-Conditioned Behavior Cloning (BC) is a promising approach for solving various offline reinforcement learning problems. Instead of assessing the value of states and actions, BC directly trains the Agent behavior policy, building dependencies between the set goal, the analyzed environment state and the Agent's action. This is achieved using supervised learning methods on pre-collected offline trajectories. The familiar Decision Transformer method and its derivative algorithms have demonstrated
    ...
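The supervised-learning view of behavior cloning described above can be sketched in a few lines of Python. This is a toy non-parametric version: the nearest-neighbour policy, the 1-D goals and states, and the recorded triples are illustrative assumptions, not the article's implementation:

```python
import math

# Offline trajectories: (goal, state, action) triples collected beforehand.
# Toy 1-D numbers stand in for the learned latent representations.
offline_data = [
    (1.0, 0.0, +1),   # goal to the right of the state -> move right
    (1.0, 0.5, +1),
    (0.0, 1.0, -1),   # goal to the left of the state -> move left
    (0.0, 0.6, -1),
]

def cloned_policy(goal: float, state: float) -> int:
    """Nearest-neighbour behavior cloning: imitate the action taken in
    the most similar recorded (goal, state) situation."""
    _, _, action = min(
        offline_data,
        key=lambda rec: math.hypot(rec[0] - goal, rec[1] - state))
    return action

action = cloned_policy(goal=1.0, state=0.1)
```

The essential point carries over to the parametric case: the policy is fit purely by matching recorded actions conditioned on goal and state, with no value estimation involved.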
  4. Neural networks made easy (Part 70): Closed-Form Policy Improvement Operators (CFPI)

    08-08-2024 at 07:30 AM
    The approach of optimizing the Agent policy with constraints on its behavior has proven promising for offline reinforcement learning problems. By exploiting historical transitions, the Agent policy is trained to maximize a learned value function.

    A behavior-constrained policy helps avoid a significant distribution shift in the Agent's actions, which provides sufficient confidence in the assessment of action values. In the previous article, we got acquainted
    ...
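A minimal sketch of the constrained-improvement idea, assuming a toy learned value function and a quadratic deviation penalty; none of these numbers or names come from the article:

```python
# Behaviour-constrained policy improvement in miniature: choose the
# action that maximises a learned value estimate minus a penalty for
# deviating from the action the behaviour policy actually took.

def q_value(action: float) -> float:
    # Toy learned value estimate, peaking at action = 2.0.
    return -(action - 2.0) ** 2

def improved_action(behavior_action: float, lam: float) -> float:
    # Search a small grid of candidates around the behaviour action.
    candidates = [behavior_action + d / 10.0 for d in range(-10, 11)]
    return max(candidates,
               key=lambda a: q_value(a) - lam * (a - behavior_action) ** 2)

# A strong constraint keeps the policy close to the data;
# a weak one lets it shift toward the value optimum.
conservative = improved_action(behavior_action=0.0, lam=10.0)
greedy = improved_action(behavior_action=0.0, lam=0.0)
```

The penalty weight plays the role of the trust placed in the learned value function: the further an action strays from the dataset, the less its estimated value can be believed.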
  5. Neural networks made easy (Part 69): Density-based support constraint for the behavioral policy (SPOT)

    07-31-2024 at 07:30 AM
    Offline reinforcement learning makes it possible to train models on previously collected interaction data, greatly reducing the amount of live interaction with the environment that is required. Moreover, given the complexity of modeling the environment, we can collect real-time data from multiple research agents and then train the model on this data.

    At the same time, using a static training dataset significantly reduces the environment information available to us.
    ...
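The density-based support constraint can be miniaturised as follows; the Gaussian kernel estimate, bandwidth and threshold here are illustrative assumptions rather than SPOT's actual construction:

```python
import math

# Actions observed in the offline dataset (toy 1-D values).
dataset_actions = [0.9, 1.0, 1.1, 1.0, 0.95]

def density(action: float, bandwidth: float = 0.2) -> float:
    """Gaussian kernel density estimate of the behaviour policy's
    action distribution, built from the offline dataset."""
    norm = len(dataset_actions) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-((action - a) / bandwidth) ** 2 / 2)
               for a in dataset_actions) / norm

def in_support(action: float, threshold: float = 0.1) -> bool:
    # Accept only actions whose estimated density under the behaviour
    # data exceeds the threshold, i.e. actions the dataset supports.
    return density(action) >= threshold

supported = in_support(1.0)    # close to the observed actions
unsupported = in_support(3.0)  # far outside the observed actions
```

Gating candidate actions this way keeps the trained policy inside the region where the static dataset actually provides information.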
Page 2 of 353