This is a discussion on Traders Joking within the Traders Joking forums, part of the Non-Related Discussion category; Elephant in the room...

1. Elephant in the room

2. ## Two forex trading newbies talking with each other

Consider a linear regression model x_i = a + b*i + e_i in time i = 1, 2, ..., n, where the errors e_i are white noise with a Laplace distribution. The error density then has the form p(x, c) = 0.5 * c * exp(-c * |x|), so log(p(x, c)) = log(0.5) + log(c) - c * |x|.
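As a quick sanity check, the density and its logarithm above can be written down directly. This is a minimal Python sketch; the function names are mine, not from the post:

```python
import math

def laplace_pdf(x, c):
    # p(x, c) = 0.5 * c * exp(-c * |x|), the Laplace density with rate c > 0
    return 0.5 * c * math.exp(-c * abs(x))

def laplace_logpdf(x, c):
    # log p(x, c) = log(0.5) + log(c) - c * |x|
    return math.log(0.5) + math.log(c) - c * abs(x)
```

For example, laplace_pdf(0.0, 2.0) equals 1.0, and the two functions agree up to rounding for any x and c.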

The likelihood function for the noise has the form L = p(d_1, c) * p(d_2, c) * ... * p(d_n, c), where d_i = x_i - a - b*i are the residuals of the model. The log-likelihood is LL = n * log(0.5) + n * log(c) - c * S, where S = |d_1| + |d_2| + ... + |d_n|. S does not depend on the parameter c, so the problem of maximizing LL is solved in two stages.

This is all true. The question is what exactly to take as the deviation between the two series. For example, the traditional view is to use the length of the perpendicular to the regression line. But that does not seem quite right to me, because it measures the deviation not relative to the previous values but relative to some midpoint of them. A quality such as the "asymmetry" of the opening is lost, and I would like to capture it.
What do you think:
• minimization of S with respect to a and b (since c > 0)?
or
• maximization of LL with respect to the parameter c, using the found value of S?
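For what it's worth, the two-stage procedure described above can be sketched in Python. Stage 1 is least-absolute-deviations (LAD) regression: for a fixed slope b, the intercept minimizing the sum of absolute residuals is the median of x_i - b*i, so a simple scan over candidate slopes suffices for illustration. Stage 2 has a closed form: setting dLL/dc = n/c - S = 0 gives c = n/S. The helper names and the grid-scan approach are my own illustration, not from the thread:

```python
from statistics import median

def lad_fit(x, b_grid):
    """Stage 1: minimize S(a, b) = sum_i |x_i - a - b*i| over a and b.
    For a fixed slope b, the optimal intercept is a = median(x_i - b*i),
    so we scan a grid of candidate slopes and keep the best pair."""
    n = len(x)
    best = None
    for b in b_grid:
        a = median(x[i] - b * (i + 1) for i in range(n))
        S = sum(abs(x[i] - a - b * (i + 1)) for i in range(n))
        if best is None or S < best[0]:
            best = (S, a, b)
    return best  # (S, a, b)

def laplace_mle_c(S, n):
    """Stage 2: maximize LL = n*log(0.5) + n*log(c) - c*S.
    Setting dLL/dc = n/c - S = 0 gives the closed form c = n/S."""
    return n / S
```

On a noiseless line x_i = 1 + 0.5*i the scan recovers a = 1, b = 0.5 with S = 0 exactly, which is a quick way to check the stage-1 logic.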

3. Originally Posted by mql5

I am impressed with your knowledge. So, according to your explanation, it is possible to make money on Forex, right?


5. ## TITANIC - the new version

6. ## sleepy

Sleepy for weekend

Two friends

8. ## sleeping

9. THE TOYS - อาหมวยหาย (阿妹走) [Official MV]

