Performance of Multi-armed Bandit Algorithms

Basic Setting

  The multi-armed bandit problem is a classic reinforcement learning problem that illustrates the dilemma between exploitation and exploration.
  We consider a time-slotted bandit system ($t = 0,1,2,\dots$) with three arms, denoted by the arm set $\{1,2,3\}$. Pulling arm $j$ ($j\in\{1,2,3\}$) yields a reward $r_j$ that follows a Bernoulli distribution with mean $\theta_j$,

$$P(r_j = 1) = \theta_j, \qquad P(r_j = 0) = 1-\theta_j,$$

where $\theta_j\in(0,1)$ for $j\in\{1,2,3\}$.

  Now we run this bandit system for $N$ ($N \gg 3$) time slots. At each time slot $t$, we choose one and only one arm from the three arms, denoted $I(t)\in \{1,2,3\}$. We then pull arm $I(t)$ and obtain a reward $r_{I(t)}$. Our objective is to find a policy that chooses an arm $I(t)$ at each time slot $t$ such that the expected aggregated reward is maximized, i.e.,

$$\max\ \mathbb{E}\left[\sum_{t=1}^{N} r_{I(t)}\right].$$

If we knew the values of $\theta_j$, $j\in\{1,2,3\}$, the problem would be trivial: since $r_{I(t)}\sim \text{Bern}(\theta_{I(t)})$,

$$\mathbb{E}\left[\sum_{t=1}^{N} r_{I(t)}\right] = \sum_{t=1}^{N}\theta_{I(t)} \le N\max_{j\in\{1,2,3\}}\theta_j.$$

Choosing $I(t) = I^* = \operatorname{argmax}_{j}\theta_j$ for $t=1,2,\dots, N$ attains this bound, i.e.,

$$\mathbb{E}\left[\sum_{t=1}^{N} r_{I^*}\right] = N\max_{j}\theta_j.$$

  However, in reality, we do not know the values of $\theta_j, j\in\{1,2,3\}$. We have to estimate $\theta_j$ from empirical samples and then make a decision at each time slot.
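
  As a point of reference for the simulations below, here is a minimal sketch of such a bandit environment. The class name `BernoulliBandit` and its methods are illustrative only and are not part of the code used later.

```python
import random

class BernoulliBandit:
    """A simple multi-armed bandit with Bernoulli rewards (illustrative sketch)."""

    def __init__(self, thetas):
        self.thetas = thetas            # true success probabilities, unknown to the player

    def pull(self, j):
        # Reward is 1 with probability theta_j, and 0 otherwise
        return 1 if random.random() < self.thetas[j] else 0

# Example: an oracle that knows the parameters always pulls the best arm
bandit = BernoulliBandit([0.4, 0.6, 0.8])
oracle_total = sum(bandit.pull(2) for _ in range(10000))
print(oracle_total)   # close to 10000 * 0.8 = 8000 on average
```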

Simulation

  Suppose we obtain the Bernoulli distribution parameters from an oracle, as shown in the table below. We choose $N = 10000$ and compute the theoretically maximal expected aggregated reward over the $N$ time slots; we call this the oracle value. Note that the parameters $\theta_j, j\in\{1,2,3\}$, and the oracle value are unknown to the three bandit algorithms.

Arm $j$:        1      2      3
$\theta_j$:     0.4    0.6    0.8

Therefore, the oracle value is

$$\mathbb{E}\left[\sum_{t=1}^{N} r_{I^*}\right] = N\max_j\theta_j = 10000\times 0.8 = 8000.$$

We design three algorithms as follows to maximize the aggregated reward.

$\epsilon$-Greedy Algorithm

Main idea:

  An $\epsilon$-greedy policy simply explores (chooses one arm uniformly at random) with probability $\epsilon$, and otherwise greedily exploits the information we already have by choosing the arm with the currently highest estimated reward.
  Let $\hat{\theta}(j)$ be the empirical mean reward of arm $j$ ($j\in\{1,2,3\}$), let $I(t)\in\{1,2,3\}$ denote the arm we choose at time slot $t$, and let $count(j)$ be the number of times arm $j$ has been chosen.
  For each of the $N$ time slots, we decide the arm $I(t)$ by exploration or exploitation, and then $count(I(t))$ increases by 1. If arm $I(t)$ has already been picked $n$ times with rewards $x_1,x_2,\dots,x_n$, then $count(I(t))$ becomes $n+1$ and the estimate is updated incrementally as $\hat{\theta}(I(t)) \leftarrow \hat{\theta}(I(t)) + \frac{1}{n+1}\bigl(x_{n+1} - \hat{\theta}(I(t))\bigr)$, where $x_{n+1}$ is the newly observed reward.
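
  As a sketch, one decision-and-update step of this policy can be written as follows; the helper name `epsilon_greedy_step` and its arguments are illustrative and not taken from the code below.

```python
import random

def epsilon_greedy_step(theta_hat, count, epsilon, pull):
    """One epsilon-greedy choice followed by an incremental mean update (illustrative)."""
    if random.random() < epsilon:
        j = random.randrange(len(theta_hat))                         # explore: uniform random arm
    else:
        j = max(range(len(theta_hat)), key=lambda k: theta_hat[k])   # exploit: highest estimate
    reward = pull(j)                       # observe the Bernoulli reward of the chosen arm
    count[j] += 1
    theta_hat[j] += (reward - theta_hat[j]) / count[j]               # incremental mean update
    return j, reward
```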

Pseudocode:
Greedy_psedo.png

Code and Results
  We compute the expectation and plot the actual reward and the average reward against the time slot ($t=1,2,\dots,N$). Because $N$ is large, we plot against a logarithmic horizontal axis, $\lg t$, to see the behavior more clearly.
  To see the exact performance of each arm, we also plot the reward of each arm. If an arm is not chosen at a time slot, we record a default value of $-1$; whenever an arm is chosen, we replace the default value with the actual reward ($0$ or $1$).
  To reduce experimental error, we repeat the experiment 100 times and report the averaged values.
  Here are the code and results.

import numpy as np
import matplotlib.pyplot as plt
import math
import random
from scipy.stats import bernoulli
from matplotlib import style

for o_ in range(3):
    # Initialization
    N = 10000
    theta_j = [0.4, 0.6, 0.8]           # true Bernoulli parameters (unknown to the algorithm)
    epsilon = float(input("epsilon:"))  # exploration probability
    result = []                         # aggregated reward of each repetition
    temp = []; temp2 = []               # per-repetition reward / average-reward traces
    a0 = []; a1 = []; a2 = []           # per-repetition per-arm traces (-1 means "not pulled")
    for _ in range(100):                # repeat the experiment 100 times
        reward = []; avg = []
        arm0 = [-1] * N; arm1 = [-1] * N; arm2 = [-1] * N
        theta_hat = [0.0, 0.0, 0.0]     # empirical mean reward of each arm
        count_j = [0, 0, 0]             # number of times each arm has been chosen
        exp = 0                         # aggregated reward so far
        # Pre-generate N Bernoulli samples for every arm
        r = [list(bernoulli.rvs(size=N, p=theta_j[i])) for i in range(3)]
        for t in range(N):
            # With probability epsilon explore, otherwise exploit
            if bernoulli.rvs(p=epsilon) == 1:
                index = random.randint(0, 2)             # explore: I(t) uniform over the arms
            else:
                index = theta_hat.index(max(theta_hat))  # exploit: arm with the highest estimate
            count_j[index] += 1
            # Incremental update of the empirical mean
            theta_hat[index] += (r[index][t] - theta_hat[index]) / count_j[index]
            reward.append(r[index][t])
            if index == 0:
                arm0[t] = r[index][t]
            elif index == 1:
                arm1[t] = r[index][t]
            else:
                arm2[t] = r[index][t]
            exp += r[index][t]
            avg.append(exp / (t + 1))
        result.append(exp)
        temp.append(reward); temp2.append(avg)
        a0.append(arm0); a1.append(arm1); a2.append(arm2)

    # Average the traces over the 100 repetitions
    reward = np.mean(temp, axis=0)
    avg = np.mean(temp2, axis=0)
    arm0 = np.mean(a0, axis=0); arm1 = np.mean(a1, axis=0); arm2 = np.mean(a2, axis=0)

    print("mean=", np.mean(result))
    style.use('ggplot')

    x = [math.log10(i + 1) for i in range(N)]

    fig1 = plt.figure()
    plt.title("Epsilon-Greedy")
    plt.xlabel('lg(t)')
    plt.plot(x, reward, color="mediumseagreen", label="reward")
    plt.plot(x, avg, color="grey", label="average reward")
    plt.legend()
    plt.show()

    fig2 = plt.figure()
    plt.title("Epsilon-Greedy")
    plt.xlabel('lg(t)')
    plt.plot(x, arm0, color="#df405a", label="arm1", alpha=0.8)
    plt.plot(x, arm1, color="#3b8686", label="arm2", alpha=0.8)
    plt.plot(x, arm2, color="#0080ff", label="arm3", alpha=0.8)
    plt.legend()
    plt.show()
epsilon:0.1
mean= 7787.06
epsilon:0.5
mean= 7002.36
epsilon:0.9
mean= 6199.36

According to the simulation, we observe that:

  • $\epsilon$ is the probability of exploration. Therefore, when $\epsilon = 0.1$, the average reward almost converges to 0.8 (but never quite reaches it, because we always explore with probability $\epsilon$), and the actual reward oscillates around 0.8. We also see that, as time goes by, we are increasingly likely to choose arm 3 and only choose arm 1 or arm 2 during exploration.
  • Compared to $\epsilon = 0.1$, a larger $\epsilon$ gives a worse aggregated reward: once $t$ is large we already know that the third arm performs best, so continuing to explore with a high $\epsilon$ drags down the aggregated reward. Additionally, if $\epsilon$ is too high we exploit arm 3 with only a small probability, as shown in the figure; a rough sanity check of the three means is given below.
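
  As a rough back-of-the-envelope check (assuming the greedy choice has essentially locked onto arm 3), the long-run per-slot reward of $\epsilon$-greedy is approximately

$$(1-\epsilon)\,\theta_3 + \epsilon\cdot\frac{\theta_1+\theta_2+\theta_3}{3} = (1-\epsilon)\times 0.8 + \epsilon\times 0.6,$$

which gives about $0.78$, $0.70$ and $0.62$ for $\epsilon = 0.1, 0.5, 0.9$, i.e. roughly $7800$, $7000$ and $6200$ over $N=10000$ slots, consistent with the simulated means above.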

However, is it always the case that a larger $\epsilon$ gives worse performance? We set $\epsilon = 0.005, 0.01, 0.1, 0.3, 0.5, 0.75$, set $N=1500$, and run the simulation again.
Here are the code and the result.

import matplotlib.pyplot as plt
import math
import random
from scipy.stats import bernoulli
from matplotlib import style

N = 1500
e = [0.005, 0.01, 0.1, 0.3, 0.5, 0.75]   # exploration probabilities to compare
c = ["#fe4365", "#3b8686", "#6a60a9", "#f9a11b", "#03a6ff", "#df405a"]
style.use('ggplot')
fig = plt.figure()
x = [math.log10(i + 1) for i in range(N)]
plt.xlabel('lg(t)')
for o_ in range(len(e)):
    theta_j = [0.4, 0.6, 0.8]
    epsilon = e[o_]
    avg = []
    theta_hat = [0.0, 0.0, 0.0]
    count_j = [0, 0, 0]
    exp = 0
    # Pre-generate N Bernoulli samples for every arm
    r = [list(bernoulli.rvs(size=N, p=theta_j[i])) for i in range(3)]
    for t in range(N):
        # With probability epsilon explore, otherwise exploit
        if bernoulli.rvs(p=epsilon) == 1:
            index = random.randint(0, 2)
        else:
            index = theta_hat.index(max(theta_hat))
        count_j[index] += 1
        theta_hat[index] += (r[index][t] - theta_hat[index]) / count_j[index]
        exp += r[index][t]
        avg.append(exp / (t + 1))
    plt.plot(x, avg, color=c[o_], label=e[o_], linewidth=1)
plt.legend()
plt.show()

(Figure: average reward over time for each value of $\epsilon$.)

  We notice that if $\epsilon$ is too small, we do too little exploration and may fail to form an accurate picture of each arm's performance, so the total reward also decreases.

Improvement of $\epsilon$-Greedy Algorithm

  We design an improved $\epsilon$-greedy algorithm to get better performance. The motivation is that when $t$ is small we use a large $\epsilon$ to explore and gather information quickly, while when $t$ is large we use a small $\epsilon$ and exploit the information we already have. As shown below, we set $\epsilon = t^{-\frac{1}{2}}$ (for example, $\epsilon = 0.1$ at $t=100$ and $\epsilon = 0.01$ at $t=10000$). We keep $N=10000$ and repeat the experiment 100 times, just as in the previous experiments.

import numpy as np
import matplotlib.pyplot as plt
import math
import random
from scipy.stats import bernoulli
from matplotlib import style

N = 10000
theta_j = [0.4, 0.6, 0.8]
result = []
temp = []
temp2 = []
for _ in range(100):                    # repeat the experiment 100 times
    reward = []
    avg = []
    theta_hat = [0.0, 0.0, 0.0]
    count_j = [0, 0, 0]
    exp = 0
    # Pre-generate N Bernoulli samples for every arm
    r = [list(bernoulli.rvs(size=N, p=theta_j[i])) for i in range(3)]
    for t in range(N):
        epsilon = 1 / ((t + 1) ** 0.5)  # the improvement: decaying exploration probability
        if bernoulli.rvs(p=epsilon) == 1:
            index = random.randint(0, 2)
        else:
            index = theta_hat.index(max(theta_hat))
        count_j[index] += 1
        theta_hat[index] += (r[index][t] - theta_hat[index]) / count_j[index]
        reward.append(r[index][t])
        exp += r[index][t]
        avg.append(exp / (t + 1))
    result.append(exp)
    temp.append(reward)
    temp2.append(avg)

# Average the traces over the 100 repetitions
reward = np.mean(temp, axis=0)
avg = np.mean(temp2, axis=0)

print("mean=", np.mean(result))
style.use('ggplot')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(result, color="#58c49f", bins=20, edgecolor="darkslategrey")
plt.xlabel("Expectation")
plt.ylabel("Frequency")
plt.title("Improved Epsilon-Greedy")

x = [math.log10(i + 1) for i in range(N)]
fig, ax = plt.subplots()
plt.xlabel('lg(t)')
plt.plot(x, reward, color="seagreen", label="reward")
plt.plot(x, avg, color="grey", label="average reward")
plt.legend()
plt.show()
mean= 7956.26

As a result, we find that the new aggregated reward is much closer to 8000, and the reward and average reward converge to 0.8 more quickly.

Upper Confidence Bound (UCB) Algorithm

Main idea:

  First, we pull each arm once for initialization.
  For $t = 4,\dots, N$, we choose the arm with the highest upper confidence bound (UCB):

$$I(t) = \underset{j\in\{1,2,3\}}{\operatorname{argmax}}\left[\hat{\theta}(j) + c\sqrt{\frac{2\ln t}{count(j)}}\right],$$

where $\hat{\theta}(j)$ is the average reward observed from arm $j$, $c$ is a constant, $t$ is the number of attempts so far, and $count(j)$ is the number of times arm $j$ has been pulled. In this formula, the first term is the exploitation term and the second term is the exploration term.
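
For concreteness, the index computation can be sketched as follows; `ucb_index` is an illustrative helper, not part of the implementation below.

```python
import math

def ucb_index(theta_hat, count, t, c=1.0):
    """Upper confidence bound of every arm at time t (illustrative sketch)."""
    return [theta_hat[j] + c * math.sqrt(2 * math.log(t) / count[j])
            for j in range(len(theta_hat))]

# Example: the least-pulled arm receives the largest exploration bonus
print(ucb_index([0.40, 0.55, 0.80], [10, 20, 70], t=100, c=1.0))
```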

Pseudocode:
UCB_psedo.png

Code and Results:

import numpy as np
import matplotlib.pyplot as plt
import math
from scipy.stats import bernoulli
from matplotlib import style

for o_ in range(3):
    N = 10000
    theta_j = [0.4, 0.6, 0.8]
    result = []
    temp = []; temp2 = []
    a0 = []; a1 = []; a2 = []
    c = float(input("c:"))                  # exploration constant
    for _ in range(100):                    # in total 100 repetitions
        # Initialization: pull each arm once (time slots 0, 1, 2)
        exp = 0
        arm0 = [-1] * N; arm1 = [-1] * N; arm2 = [-1] * N
        count_j = [1, 1, 1]
        theta_hat = [0.0, 0.0, 0.0]
        ucb = [0.0, 0.0, 0.0]
        r = []
        for i in range(3):
            r.append([bernoulli.rvs(p=theta_j[i])])
            theta_hat[i] = r[i][0]
        arm0[0] = r[0][0]; arm1[1] = r[1][0]; arm2[2] = r[2][0]
        reward = [r[0][0], r[1][0], r[2][0]]
        # Pad the reward lists so each has one entry per elapsed time slot
        r[0] = r[0] + [0, 0]
        r[1] = [0] + r[1] + [0]
        r[2] = [0, 0] + r[2]
        avg = [reward[0] / 1, (reward[0] + reward[1]) / 2, (reward[0] + reward[1] + reward[2]) / 3]

        # UCB: choose the arm with the highest upper confidence bound
        for t in range(3, N):
            for j in range(3):
                ucb[j] = theta_hat[j] + c * math.sqrt(2 * math.log(t) / count_j[j])
                r[j].append(bernoulli.rvs(p=theta_j[j]))
            index = ucb.index(max(ucb))
            count_j[index] += 1
            theta_hat[index] += (r[index][-1] - theta_hat[index]) / count_j[index]
            reward.append(r[index][-1])
            exp += r[index][-1]
            if index == 0:
                arm0[t] = r[index][-1]
            elif index == 1:
                arm1[t] = r[index][-1]
            else:
                arm2[t] = r[index][-1]
            avg.append(exp / t)

        result.append(exp)
        temp.append(reward); temp2.append(avg)
        a0.append(arm0); a1.append(arm1); a2.append(arm2)

    # Average the traces over the 100 repetitions
    reward = np.mean(temp, axis=0)
    avg = np.mean(temp2, axis=0)
    arm0 = np.mean(a0, axis=0); arm1 = np.mean(a1, axis=0); arm2 = np.mean(a2, axis=0)

    print("mean=", np.mean(result))
    style.use('ggplot')

    x = [math.log10(i + 1) for i in range(N)]

    fig1 = plt.subplots()
    plt.title("UCB")
    plt.xlabel('lg(t)')
    plt.plot(x, reward, color="orange", label="reward")
    plt.plot(x, avg, color="grey", label="average reward")
    plt.legend()
    plt.show()

    fig2 = plt.subplots()
    plt.title("UCB")
    plt.xlabel('lg(t)')
    plt.plot(x, arm0, color="#df405a", label="arm1", alpha=0.8)
    plt.plot(x, arm1, color="#3b8686", label="arm2", alpha=0.8)
    plt.plot(x, arm2, color="#0080ff", label="arm3", alpha=0.8)
    plt.legend()
    plt.show()
c:1
mean= 7892.49
c:5
mean= 7148.43
c:10
mean= 6676.07

According to the simulation:

  • Each time arm $j$ is selected, $count(j)$ increases, which shrinks its exploration term. On the other hand, each time an arm other than $j$ is selected, $t$ increases while $count(j)$ stays the same, so arm $j$'s upper confidence bound grows and the arm becomes more likely to be chosen later.
  • The first term drives exploitation and the second term drives exploration. Much like the $\epsilon$-greedy algorithm, if we set $c$ larger we keep exploring even when $t$ is relatively large, and it becomes very difficult for the average reward to converge to 0.8; a small numeric illustration follows.
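
  To put numbers on this, with $c=1$ at $t=1000$ the exploration bonus is $\sqrt{2\ln 1000/900}\approx 0.12$ for an arm pulled 900 times but $\sqrt{2\ln 1000/50}\approx 0.53$ for an arm pulled only 50 times; with $c=10$ these bonuses become roughly $1.2$ and $5.3$, which dwarf the reward estimates in $[0,1]$ and keep forcing exploration.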

Improvement of UCB Algorithm

  We design an improved UCB algorithm to get better performance. The motivation is that when $t$ is small we use a large $c$ to explore and gather information quickly, while when $t$ is large we use a small $c$ and exploit the information we already have. As shown below, we set $c = \frac{2}{\ln t}$ (so $c\approx 0.87$ at $t=10$ and $c\approx 0.22$ at $t=10000$). We keep $N=10000$ and repeat the experiment 100 times as before.

import numpy as np
import matplotlib.pyplot as plt
import math
from scipy.stats import bernoulli
from matplotlib import style

N = 10000
theta_j = [0.4, 0.6, 0.8]
result = []
temp = []
temp2 = []
for _ in range(100):                    # in total 100 repetitions
    # Initialization: pull each arm once
    exp = 0
    count_j = [1, 1, 1]
    theta_hat = [0.0, 0.0, 0.0]
    ucb = [0.0, 0.0, 0.0]
    r = []
    for i in range(3):
        r.append([bernoulli.rvs(p=theta_j[i])])
        theta_hat[i] = r[i][0]
    reward = [r[0][0], r[1][0], r[2][0]]
    avg = [reward[0] / 1, (reward[0] + reward[1]) / 2, (reward[0] + reward[1] + reward[2]) / 3]

    # UCB with a decaying exploration constant
    for t in range(4, N + 1):
        c = 2 / math.log(t)             # the improvement: c shrinks as t grows
        for j in range(3):
            ucb[j] = theta_hat[j] + c * math.sqrt(2 * math.log(t) / count_j[j])
            r[j].append(bernoulli.rvs(p=theta_j[j]))
        index = ucb.index(max(ucb))
        count_j[index] += 1
        theta_hat[index] += (r[index][-1] - theta_hat[index]) / count_j[index]
        reward.append(r[index][-1])
        exp += r[index][-1]
        avg.append(exp / t)

    result.append(exp)
    temp.append(reward)
    temp2.append(avg)

# Average the traces over the 100 repetitions
reward = np.mean(temp, axis=0)
avg = np.mean(temp2, axis=0)

print("mean=", np.mean(result))
style.use('ggplot')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(result, color="wheat", edgecolor="orange")
plt.xlabel("Expectation")
plt.ylabel("Frequency")
plt.title("Improved UCB")

# Plot the reward and average reward
x = [math.log10(i + 1) for i in range(N)]
fig, ax = plt.subplots()
plt.xlabel('lg(t)')
plt.plot(x, reward, color="lightsalmon", label="reward")
plt.plot(x, avg, color="grey", label="average reward")
plt.legend()
plt.show()
mean= 7965.27

  On average, we get better performance.
  However, we also notice that in some runs the total reward is relatively small: this UCB algorithm is quite unstable and does not have a good lower bound.

Thompson Sampling Algorithm

Main idea:
  We notice that the UCB algorithm does not actually use any information about the form of the reward distribution.
  The next algorithm, by contrast, takes full advantage of the fact that each arm generates a Bernoulli-distributed reward.
  In the Thompson Sampling algorithm, we place a prior distribution on each arm's parameter, $\hat{\theta}(j)\sim\text{Beta}(\alpha_j,\beta_j)$. During the experiment, if we observe $k$ successes out of $m$ pulls of arm $j$, we update the posterior to $\hat{\theta}(j)\sim\text{Beta}(\alpha_j+k,\ \beta_j+m-k)$.
  The proof of this update is sketched below.
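
  A standard Beta-Bernoulli conjugacy argument gives this update. By Bayes' rule, after observing $k$ successes in $m$ pulls of arm $j$,

$$p(\theta\mid \text{data}) \propto p(\text{data}\mid\theta)\,p(\theta) \propto \theta^{k}(1-\theta)^{m-k}\cdot\theta^{\alpha_j-1}(1-\theta)^{\beta_j-1} = \theta^{\alpha_j+k-1}(1-\theta)^{\beta_j+m-k-1},$$

which is proportional to the density of $\text{Beta}(\alpha_j+k,\ \beta_j+m-k)$.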

  In the simulation, at each time slot we draw a sample $\hat{\theta}(j)$ from each arm's current Beta distribution, pull the arm with the highest sampled value, and then update that arm's Beta parameters with the observed reward.

Pseudocode:
TS_psedo.png

Code and Results:

import numpy as np
import matplotlib.pyplot as plt
import math
from scipy.stats import bernoulli
from scipy.stats import beta
from matplotlib import style

N = 10000
theta = [0.4, 0.6, 0.8]
for oo in range(2):
    # Initialization: read the Beta prior parameters of each arm
    result = []
    temp = []; temp2 = []
    a = []; b = []
    a0 = []; a1 = []; a2 = []
    a.append(float(input("alpha1 = ")))
    b.append(float(input("beta1 = ")))
    a.append(float(input("alpha2 = ")))
    b.append(float(input("beta2 = ")))
    a.append(float(input("alpha3 = ")))
    b.append(float(input("beta3 = ")))
    for o in range(100):                # repeat the experiment 100 times
        exp = 0
        aa = list(a); bb = list(b)      # per-run copies of the prior parameters
        reward = []; avg = []
        arm0 = [-1] * N; arm1 = [-1] * N; arm2 = [-1] * N
        for t in range(N):              # for each time slot
            # Sample a parameter estimate from each arm's current Beta distribution
            theta_hat = [float(beta.rvs(aa[j], bb[j])) for j in range(3)]
            index = theta_hat.index(max(theta_hat))
            r = bernoulli.rvs(p=theta[index])
            exp += r
            reward.append(r)
            if index == 0:
                arm0[t] = r
            elif index == 1:
                arm1[t] = r
            else:
                arm2[t] = r
            avg.append(exp / (t + 1))
            # Posterior update: success -> alpha + 1, failure -> beta + 1
            aa[index] += r
            bb[index] += 1 - r
        result.append(exp)
        temp.append(reward); temp2.append(avg)
        a0.append(arm0); a1.append(arm1); a2.append(arm2)

    # Average the traces over the 100 repetitions
    reward = np.mean(temp, axis=0)
    avg = np.mean(temp2, axis=0)
    arm0 = np.mean(a0, axis=0); arm1 = np.mean(a1, axis=0); arm2 = np.mean(a2, axis=0)

    print("mean=", np.mean(result))
    style.use('ggplot')

    x = [math.log10(i + 1) for i in range(N)]

    fig1 = plt.subplots()
    plt.title("Thompson Sampling")
    plt.xlabel('lg(t)')
    plt.plot(x, reward, color="#A593E0", label="reward")
    plt.plot(x, avg, color="grey", label="average reward")
    plt.legend()
    plt.show()

    fig2 = plt.subplots()
    plt.title("Thompson Sampling")
    plt.xlabel('lg(t)')
    plt.plot(x, arm0, color="#df405a", label="arm1", alpha=0.8)
    plt.plot(x, arm1, color="#3b8686", label="arm2", alpha=0.8)
    plt.plot(x, arm2, color="#0080ff", label="arm3", alpha=0.8)
    plt.legend()
    plt.show()
alpha1 = 1
beta1 = 1
alpha2 = 1
beta2 = 1
alpha3 = 1
beta3 = 1
mean= 7988.33
alpha1 = 2
beta1 = 4
alpha2 = 3
beta2 = 6
alpha3 = 1
beta3 = 2
mean= 7988.64

According to the simulation:

  • Unlike the $\epsilon$-greedy and UCB algorithms, as $t\rightarrow \infty$ the probability of choosing arm 1 or arm 2 almost converges to 0.
  • We notice that the prior distribution makes only a small difference to the final reward. However, the closer the prior is to the true distribution, the less time it takes to find out which arm is the best.

Summary

The overall results are summarized as follows:
a.png

Dependent Cases

  In the previous sections, we assumed that the reward distributions of the three arms are independent. Here, we present a method for analyzing dependent multi-armed bandit problems.

Models

  To generalize the problem, we assume there is a machine with $N$ arms that are grouped into $K$ clusters. Arms within a single cluster are dependent, while arms in different clusters are independent of each other. Each arm $i$ has a fixed but unknown probability $\theta(i)$ of generating a success reward (i.e., $r_i = 1$). Let $[i]$ denote the cluster that contains arm $i$, and let $C_{[i]}$ be the set of all arms in that cluster, including arm $i$ itself.
  Let $s_i(t)$ be the number of times arm $i$ has successfully generated a reward within the first $t$ time slots, $f_i(t)$ the number of times arm $i$ has failed to generate a reward, $\varphi(h_{[i]})$ the prior probability distribution, and $h_{[i]}$ a function that abstracts out the dependence of the arms within a cluster on each other.
  Therefore, we have that

  In each time slot, we solve the problem in two steps:

1. Selection step: Apply a specific policy to choose the arm to pull.
2. Update step: Use the result of the arm pull to update the information.

  Note the difference between the independent and dependent cases. In the independent setting, once arm $i$ is pulled, only the information about arm $i$ is updated; in the dependent case, once arm $i$ is chosen, the state of every arm in $C_{[i]}$ is updated.

Method

  In each time slot, for each cluster $i$, we compute the reward estimate $\hat{\theta}(i)$ and the corresponding estimate variance $\hat{\sigma}(i)$ of that cluster. We then use a specific policy (such as UCB, Thompson Sampling, etc.) to select a cluster, and finally use the policy again to choose the “best” arm within that cluster. In this method, a good cluster estimate should be accurate and should converge quickly (i.e., $\hat{\sigma}(i)\to 0$ quickly).
  We consider the two strategies below; a small illustrative sketch follows the list.

  • Mean Strategy:
    • For the arms in cluster $j$, following the Binomial model, we build the cluster characteristics from the pooled posterior success and failure counts $s_i, f_i$ of the arms in that cluster.
    • However, we notice that with the mean strategy a single “bad” arm in a cluster drags down the cluster's estimated reward, so it can be hard to find the best arm if its siblings perform terribly. We therefore design another strategy, shown below.
  • Max Strategy:
    • In this strategy, each cluster is represented by the arm that is currently best in it, i.e., $\hat{\theta}(j) = \max_{i\in C_j}\hat{\theta}(i)$.
    • Compared to the mean strategy, this estimate is closer to the maximum success probability within cluster $j$, because $\hat{\theta}(j)$ is not dragged down by the weaker arms in the cluster.
    • However, this method neglects the observations of all the other arms in the cluster and therefore does not take full advantage of the available information.
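
  A minimal sketch of the two cluster estimates, assuming the simplest pooling of posterior success/failure counts (the helper names and the exact pooling formula are illustrative, not taken from the original method):

```python
def cluster_estimate_mean(s, f):
    """Mean strategy: pool the successes/failures of every arm in the cluster (illustrative)."""
    total_s, total_f = sum(s), sum(f)
    return total_s / (total_s + total_f)

def cluster_estimate_max(s, f):
    """Max strategy: represent the cluster by its currently best arm (illustrative)."""
    return max(si / (si + fi) for si, fi in zip(s, f))

# Example cluster with one strong arm and two weak siblings
s = [8, 3, 40]    # observed successes of each arm
f = [12, 17, 10]  # observed failures of each arm
print(cluster_estimate_mean(s, f))  # ~0.57, dragged down by the weak arms
print(cluster_estimate_max(s, f))   # 0.80, the best arm's estimate
```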

Reference

  1. Slivkins, A. (2019). Introduction to multi-armed bandits. Foundations and Trends in Machine Learning. https://doi.org/10.1561/2200000068
  2. Pandey, S., Chakrabarti, D., & Agarwal, D. (2007). Multi-armed bandit problems with dependent arms. Proceedings of the 24th International Conference on Machine Learning (ICML), 721-728.
  3. Connor Lee, Hoang Le, R. M. (2016). Multi-armed Bandits. Retrieved from http://www.yisongyue.com/courses/cs159/