Q-learning

import numpy as np

Initialize the Q matrix to zeros.

q = np.zeros((6, 6))  # plain ndarray; np.matrix is deprecated

The reward matrix is defined in advance, similar to the emission matrix of an HMM. A value of -1 means there is no edge connecting the two states.

r = np.array([[-1, -1, -1, -1,  0,  -1],
              [-1, -1, -1,  0, -1, 100],
              [-1, -1, -1,  0, -1,  -1],
              [-1,  0,  0, -1,  0,  -1],
              [ 0, -1, -1,  0, -1, 100],
              [-1,  0, -1, -1,  0, 100]])
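
As a quick sanity check (my addition, not part of the original post), the edges encoded by r can be listed directly: any entry r[s, a] >= 0 marks a valid move from state s to state a.

# Print the valid moves encoded by the reward matrix:
# r[s, a] >= 0 means there is an edge from state s to state a.
for s in range(6):
    moves = [a for a in range(6) if r[s, a] >= 0]
    print("state %d -> %s" % (s, moves))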

Hyperparameters

# discount factor
gamma = 0.8
# exploration rate: the probability of choosing a random action instead of the greedy one
epsilon = 0.4

The main training loop

for episode in range(101):
    # random initial state
    state = np.random.randint(0, 6)
    # loop until the terminal state (state 5) is reached
    while state != 5:
        # Collect the possible actions from the current state.
        # Even in the random case, we cannot choose actions whose r[state, action] = -1.
        possible_actions = []
        possible_q = []
        for action in range(6):
            if r[state, action] >= 0:
                possible_actions.append(action)
                possible_q.append(q[state, action])

        # Pick the next action with the epsilon-greedy strategy.
        if np.random.random() < epsilon:
            # explore: choose a random valid action
            action = possible_actions[np.random.randint(0, len(possible_actions))]
        else:
            # exploit: choose the valid action with the highest Q value
            action = possible_actions[np.argmax(possible_q)]

        # Q-learning update (with a learning rate of 1):
        # Q(s, a) = r(s, a) + gamma * max_a' Q(s', a'), where the next state s' equals the action here
        q[state, action] = r[state, action] + gamma * q[action].max()

        # Go to the next state
        state = action

    # Display training progress every 10 episodes
    if episode % 10 == 0:
        print("------------------------------------------------")
        print("Training episode: %d" % episode)
        print(q)
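
Once training finishes, the learned Q table can be used greedily. Here is a minimal sketch (my addition; the start state 2 is an arbitrary example) that follows the highest-Q action from a start state until the goal state 5 is reached:

# Read off the greedy policy from the learned Q table.
state = 2                    # arbitrary example start state
path = [state]
for _ in range(10):          # step cap in case training has not converged
    if state == 5:           # state 5 is the terminal/goal state
        break
    state = int(np.argmax(q[state]))
    path.append(state)
print("Greedy path:", path)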