In this article, I will try to find out whether the algorithm is actually suited to solving complex problems. The classical version of the algorithm, and many of its modifications, carry a significant limitation: the function being optimized must be smooth and continuous, which makes them unsuitable for discrete functions. However, in line with this series of articles, all the algorithms under consideration will be changed in such a way (if they have any such restrictions) ...
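The excerpt above does not name the specific algorithm, so the following is only a generic Python sketch of the distinction it describes: a gradient-based optimizer has nothing to follow on a discrete (staircase) objective, while a simple gradient-free random search still makes progress. Both the objective and the search loop are hypothetical illustrations, not the author's method.

```python
import numpy as np

# A discrete "staircase" objective: constant on unit intervals, so its
# gradient is zero almost everywhere and a gradient step goes nowhere.
def staircase(x):
    return float(np.sum(np.floor(x) ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=5)

# Finite-difference "gradient": identically zero inside each step.
eps = 1e-6
grad = np.array([(staircase(x + eps * e) - staircase(x)) / eps
                 for e in np.eye(len(x))])
print("finite-difference gradient:", grad)  # all zeros here

# A simple random search (one of many gradient-free alternatives):
# perturb the current point and keep the move only if it improves the
# objective. No smoothness or continuity is required.
best_x, best_f = x.copy(), staircase(x)
for _ in range(2000):
    candidate = best_x + rng.normal(0.0, 1.0, size=len(best_x))
    f = staircase(candidate)
    if f < best_f:
        best_x, best_f = candidate, f
print("best value found by random search:", best_f)
```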
We continue studying different reinforcement learning methods.
The K-Nearest Neighbors algorithm is a non-parametric supervised learning classifier that uses proximity to classify or predict the grouping of an individual data point. While it is mostly used for classification problems, it can also solve regression problems. Its use as a classifier rests on the assumption that similar points in a dataset can be found near one another; k-nearest neighbors ...
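As a minimal sketch of the proximity idea described above (a from-scratch illustration on hypothetical toy data, not the article's implementation), a query point can be classified by a majority vote among its k nearest training points:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by majority vote among its k nearest
    training points, using Euclidean distance."""
    distances = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(distances)[:k]      # indices of the k closest points
    votes = Counter(y_train[nearest])        # count labels among those neighbours
    return votes.most_common(1)[0][0]        # most frequent label wins

# Toy data: two well-separated clusters labelled 0 and 1.
X_train = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                    [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([1.0, 1.0]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([5.0, 5.0]), k=3))  # -> 1
```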
In the previous article, we started exploring reinforcement learning methods and built our first trainable model based on the cross-entropy method. In this article, we continue studying reinforcement learning methods and proceed to the Deep Q-Learning method.
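Before turning to the deep variant, it may help to recall the plain Q-learning update that Deep Q-Learning approximates with a neural network instead of a table. The sketch below is a hedged illustration on a hypothetical toy environment, not the model built in the article:

```python
import numpy as np

# Toy environment (hypothetical, for illustration only): a corridor of
# 5 states. Action 0 moves left, action 1 moves right; reaching the
# rightmost state yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))      # tabular action-value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

for _ in range(500):                     # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

# Greedy policy after training: expect action 1 (move right) in every
# non-terminal state.
print(np.argmax(Q[:-1], axis=1))
```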
In the previous articles of this series, we have already covered supervised and unsupervised learning algorithms. This article opens another chapter of machine learning: reinforcement learning.