This article is also available at http://kunth.github.io/2014/04/23/bandit-algorithm.html
Two shared links about the bandit algorithm
What is a (multi-armed) bandit algorithm?
epsilon-greedy algorithm
• With probability 1 – epsilon, the epsilon-Greedy algorithm exploits the best known option.
• With probability epsilon / 2, the epsilon-Greedy algorithm explores the best known option.
• With probability epsilon / 2, the epsilon-Greedy algorithm explores the worst known option.
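Here is a minimal Python sketch of that rule, generalized to n arms; the EpsilonGreedy class name and the running-average update are my own illustrative choices, not code taken from the links below.

```python
import random

class EpsilonGreedy:
    """Minimal epsilon-greedy bandit: with probability epsilon explore a
    random arm, otherwise exploit the arm with the best estimated reward."""

    def __init__(self, epsilon, n_arms):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # how many times each arm was pulled
        self.values = [0.0] * n_arms  # running average reward per arm

    def select_arm(self):
        if random.random() > self.epsilon:
            # exploit: pick the best known option
            return self.values.index(max(self.values))
        # explore: pick an arm uniformly at random
        # (with two arms, each one is chosen with probability epsilon / 2)
        return random.randrange(len(self.values))

    def update(self, arm, reward):
        # incremental update of the running average for the pulled arm
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```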
One arm denotes one option, and, when pulled, any given arm will output a reward. You need to cope with risk by figuring out which arm has the highest average reward.
What makes a bandit problem special is that we only receive a small amount of information about the rewards from each arm: we only find out about the reward that was given out by the arm we actually pulled.
Whichever arm we pull, we miss out on information about the other arms that we didn't pull.
Every time we experiment with an arm that isn't the best arm, we lose reward, because we could, at least in principle, have pulled a better arm.
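To make that concrete, here is a small simulation sketch reusing the EpsilonGreedy class above; the Bernoulli payout probabilities and the epsilon value are made up for illustration. Each round we observe only the pulled arm's reward, and we track the expected reward given up by not pulling the best arm.

```python
import random

# hypothetical Bernoulli arms; the true payout probabilities are unknown to the player
true_means = [0.1, 0.5, 0.3]
best_mean = max(true_means)

algo = EpsilonGreedy(epsilon=0.1, n_arms=len(true_means))
total_regret = 0.0

for t in range(10000):
    arm = algo.select_arm()
    # we only observe the reward of the arm we actually pulled
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    algo.update(arm, reward)
    # expected reward lost this round by not pulling the best arm
    total_regret += best_mean - true_means[arm]

print("total expected regret after 10000 pulls:", total_regret)
```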
Here are two links about the multi-armed bandit algorithm that may help you:
Bandit Algorithms for Website Optimization
Algorithms for the multi-armed bandit problem