1 Introduction



2 Code Excerpt
classdef DQNEstimator < handle
    properties (SetAccess = private)
        env;
        alpha;
        weights;
        hidden_layer;
    end
    methods
        function obj = DQNEstimator(env, alpha, hidden_layer)
            obj.env = env;
            obj.alpha = alpha;
            obj.hidden_layer = hidden_layer;
            % Scaled Gaussian initialization; the extra row in each matrix is the bias weight.
            obj.weights.input  = normrnd(0, 1, [env.complexFeaturesLen+1, hidden_layer(1)]) / sqrt(obj.env.complexFeaturesLen);
            obj.weights.hidden = normrnd(0, 1, [hidden_layer(1)+1, hidden_layer(2)]) / sqrt(hidden_layer(1));
            obj.weights.out    = normrnd(0, 1, [hidden_layer(2)+1, length(obj.env.actionSpace)]) / sqrt(hidden_layer(2));
        end

        function set_weights(obj, weights)
            obj.weights = weights;
        end

        function value = predict(obj, state)
            features = obj.env.get_complex_state_features(state); % features are already scaled
            % sigmoid is assumed to be a helper on the path: 1 ./ (1 + exp(-x))
            value.hidden_in_value   = [1 features] * obj.weights.input;
            value.hidden_out_value  = sigmoid(value.hidden_in_value);   % activation function
            value.hidden_in_value2  = [1 value.hidden_out_value] * obj.weights.hidden;
            value.hidden_out_value2 = sigmoid(value.hidden_in_value2);  % activation function
            value.out_value         = [1 value.hidden_out_value2] * obj.weights.out; % linear output layer
        end

        function update(obj, state, action, target)
            features = [1 obj.env.get_complex_state_features(state)];
            value = obj.predict(state);
            out_value = value.out_value(action);
            hidden_out_value2 = value.hidden_out_value2;
            hidden_out_value  = value.hidden_out_value;

            % Gradient w.r.t. input-layer weights, backpropagated through both hidden layers.
            derivative_in = zeros(length(features), obj.hidden_layer(1));
            for i = 1:obj.hidden_layer(1)
                derivative_in(:,i) = (out_value - target) * ...
                    sum(obj.weights.out(2:end,action)' .* ...
                        (hidden_out_value2 .* (1-hidden_out_value2)) .* ...
                        obj.weights.hidden(i+1,:)) * ...
                    hidden_out_value(i) * (1-hidden_out_value(i)) * features;
                obj.weights.input(:,i) = obj.weights.input(:,i) - obj.alpha * derivative_in(:,i);
            end

            % Gradient w.r.t. hidden-layer weights.
            % Note: the matrix has hidden_layer(1)+1 rows, and the output weight
            % must be taken from the chosen action's column.
            derivative_hidden = zeros(obj.hidden_layer(1)+1, obj.hidden_layer(2));
            for i = 1:obj.hidden_layer(2)
                derivative_hidden(:,i) = (out_value - target) * obj.weights.out(i+1,action) * ...
                    hidden_out_value2(i) * (1-hidden_out_value2(i)) * [1 hidden_out_value];
                obj.weights.hidden(:,i) = obj.weights.hidden(:,i) - obj.alpha * derivative_hidden(:,i);
            end

            % Gradient w.r.t. output-layer weights; only the chosen action's column is updated.
            derivative_out = (out_value - target) * [1 hidden_out_value2]';
            obj.weights.out(:,action) = obj.weights.out(:,action) - obj.alpha * derivative_out;
        end
    end
end
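The class above is a two-hidden-layer Q-value network with sigmoid hidden activations, a linear output layer, and a per-sample gradient-descent update on the squared error between the chosen action's Q-value and the TD target. As a minimal sketch of the same computation, here is a NumPy version (the names `n_features`, `n_actions`, and `hidden` stand in for `env.complexFeaturesLen`, `length(env.actionSpace)`, and `hidden_layer`; the environment itself is not modeled):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DQNEstimator:
    """Two-hidden-layer Q estimator mirroring the MATLAB class (a sketch,
    not the original implementation)."""

    def __init__(self, n_features, n_actions, hidden, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.alpha = alpha
        # The extra leading row in each matrix is the bias weight,
        # matching the [1 features] augmentation in the MATLAB code.
        self.w_in  = rng.normal(size=(n_features + 1, hidden[0])) / np.sqrt(n_features)
        self.w_hid = rng.normal(size=(hidden[0] + 1, hidden[1])) / np.sqrt(hidden[0])
        self.w_out = rng.normal(size=(hidden[1] + 1, n_actions)) / np.sqrt(hidden[1])

    def predict(self, features):
        h1 = sigmoid(np.concatenate(([1.0], features)) @ self.w_in)
        h2 = sigmoid(np.concatenate(([1.0], h1)) @ self.w_hid)
        q  = np.concatenate(([1.0], h2)) @ self.w_out  # linear output layer
        return q, h1, h2

    def update(self, features, action, target):
        q, h1, h2 = self.predict(features)
        err = q[action] - target              # dL/dq for L = 0.5 * (q - target)^2
        x  = np.concatenate(([1.0], features))
        a1 = np.concatenate(([1.0], h1))
        a2 = np.concatenate(([1.0], h2))
        # Backpropagate through the linear output and the two sigmoid layers.
        d2 = err * self.w_out[1:, action] * h2 * (1 - h2)   # delta at hidden layer 2
        d1 = (self.w_hid[1:, :] @ d2) * h1 * (1 - h1)       # delta at hidden layer 1
        self.w_out[:, action] -= self.alpha * err * a2      # only the chosen action's column
        self.w_hid -= self.alpha * np.outer(a1, d2)
        self.w_in  -= self.alpha * np.outer(x, d1)
        return err
```

Repeated calls to `update` with a fixed state, action, and target drive the corresponding Q-value toward the target, which is the behavior the MATLAB `update` method implements one weight matrix at a time.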
3 Simulation Results

4 References
[1] 王菁华, 崔世钢, 罗云林. Matlab-based simulation of intelligent robot path planning [C] // Proceedings of the 2008 Conference on System Simulation Technology and Its Applications. 2008.
About the blogger: specializes in Matlab simulation across intelligent optimization algorithms, neural network prediction, signal processing, cellular automata, image processing, path planning, UAVs, and other fields; questions about the related Matlab code can be sent via private message.
Some of the theory is cited from online sources; in case of infringement, please contact the blogger for removal.
This post presents a deep Q-learning (DQN) estimator implemented in Matlab. The estimator initializes the weights of the input, hidden, and output layers and uses the sigmoid activation function; the update rule applies gradient descent to optimize the network weights.




