
Data Science and Machine Learning (Part 06): Gradient Descent

by mql5, 07-31-2022 at 02:53 AM
The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent.
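In symbols, each iteration updates the current point x via x_new = x_old - γ * ∇f(x_old), where γ is a small learning rate. Below is a minimal sketch of that update in Python on a toy one-dimensional function; the function f, its derivative, the learning rate, and the starting point are illustrative assumptions, not the article's own example.

# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2.
# The function, its derivative, the learning rate and the starting
# point are illustrative assumptions, not taken from the article.

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    # Analytical derivative: d/dx (x - 3)^2 = 2 * (x - 3)
    return 2.0 * (x - 3.0)

def gradient_descent(start, learning_rate=0.1, iterations=100):
    x = start
    for _ in range(iterations):
        # Step against the gradient: the direction of steepest descent.
        x = x - learning_rate * grad_f(x)
    return x

if __name__ == "__main__":
    minimum = gradient_descent(start=0.0)
    print(f"x = {minimum:.6f}, f(x) = {f(minimum):.6f}")  # converges to x = 3, f(x) = 0

Flipping the sign of the step (x + learning_rate * grad_f(x)) would climb toward a local maximum instead, which is the gradient ascent variant mentioned above.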
----------------

more...
