In this thesis, a dynamic theory of learning, also called ``online learning'' in computer science, is presented as stochastic approximation of the regression function in reproducing kernel Hilbert spaces (RKHS). The setting starts from a probability measure on an input-output space, from which samples are drawn sequentially in an independent and identically distributed way. Online learning algorithms exploit samples recursively, in contrast to ``batch learning'', which has access to all the data at once. The algorithms studied here are based on stochastic approximations of the regression function in an RKHS. Novel probabilistic exponential inequalities in Hilbert spaces, in the tradition of the Russian school, are exploited to study martingale and reverse-martingale expansions of the error. Tight probabilistic upper bounds are obtained, in the sense that over a certain range of complexity classes, online learning algorithms achieve the same convergence rates as batch learning, and thus asymptotically reach optimal rates in certain senses.
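The recursive scheme described above can be sketched concretely. The following is a minimal illustration, not the thesis's exact algorithm: a stochastic-gradient update for regularized least-squares regression in an RKHS, f_{t+1} = (1 - gamma_t lambda) f_t - gamma_t (f_t(x_t) - y_t) K_{x_t}, where the iterate is stored as kernel coefficients on the points seen so far. The Gaussian kernel, the step-size schedule, and all names are illustrative assumptions.

```python
# Illustrative sketch of online (stochastic-gradient) kernel regression.
# Kernel choice, step sizes, and regularization are assumptions, not the
# thesis's specific algorithm.
import math
import random

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian (RBF) kernel on the real line.
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def online_kernel_regression(samples, step=0.5, lam=0.01):
    """One pass over i.i.d. samples, updating
    f_{t+1} = (1 - gamma_t*lam) * f_t - gamma_t * (f_t(x_t) - y_t) * K_{x_t}.

    The iterate f_t is represented by kernel coefficients on past points."""
    points, coeffs = [], []
    for t, (x, y) in enumerate(samples, start=1):
        gamma_t = step / math.sqrt(t)  # decaying step size (an assumption)
        # Evaluate the current iterate f_t at x_t.
        fx = sum(c * gaussian_kernel(z, x) for z, c in zip(points, coeffs))
        # Shrink previous coefficients by the regularization factor.
        coeffs = [(1 - gamma_t * lam) * c for c in coeffs]
        # Add a new term -gamma_t * (f_t(x_t) - y_t) * K_{x_t}.
        points.append(x)
        coeffs.append(-gamma_t * (fx - y))

    def f(x):
        return sum(c * gaussian_kernel(z, x) for z, c in zip(points, coeffs))
    return f

random.seed(0)
# Noisy samples of a smooth target function (here sin, for illustration).
data = [(x, math.sin(x) + 0.1 * random.gauss(0, 1))
        for x in (random.uniform(-3, 3) for _ in range(500))]
f_hat = online_kernel_regression(data)
err = sum((f_hat(x) - math.sin(x)) ** 2 for x in [-2, -1, 0, 1, 2]) / 5
```

Note that each sample is used exactly once as it arrives, so the pass is recursive in the sense the abstract describes; a batch learner would instead solve a single regularized least-squares problem over all 500 points at once.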