
YYDS! Playing with Autonomous Driving in Python


The same reward function is used:

This function itself can only be changed inside the package source; from the outside you can only adjust the weights of its terms. (The reward function for the parking scenario is given in the official documentation; I won't type out the inequalities here...)
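As a rough illustration (my own sketch, not from the original post), recent versions of highway-env expose the reward weights through the environment config, so they can be tuned without touching the package source. The key names below ("collision_reward", "high_speed_reward", and so on) are assumptions that depend on the highway-env version installed:

import gym
import highway_env

# Minimal sketch: tune reward weights via the config instead of editing source.
env = gym.make("highway-v0")
env.configure({
    "collision_reward": -1,     # penalty when the ego vehicle crashes
    "high_speed_reward": 0.4,   # bonus for driving near the upper speed bound
    "right_lane_reward": 0.1,   # bonus for keeping to the rightmost lanes
    "lane_change_reward": 0,    # cost of a lane-change action
})
env.reset()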

2. Building the model

The design and construction of the DQN network were already covered in another of my articles, so I won't repeat the details here. I use the first state representation, Kinematics, for training.

Since the raw state is small (5 vehicles × 7 features), there is no need for a CNN; the 2D observation of size [5, 7] is simply flattened into [1, 35]. The model's input size is therefore 35, and its output is the Q-value of each action, 5 in total.
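As a small illustration of the shapes involved (my own sketch, not from the original code), the Kinematics observation is a 5×7 array that is simply flattened into a length-35 vector before being fed to the network:

import numpy as np

# Dummy (5, 7) Kinematics observation: 5 vehicles x 7 features
obs = np.zeros((5, 7), dtype=np.float32)
flat = obs.reshape(1, 35)   # network input of size 35
print(flat.shape)           # (1, 35)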

import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as T
from torch import FloatTensor, LongTensor, ByteTensor
from collections import namedtuple
import random
import numpy as np

Tensor = FloatTensor

EPSILON = 0          # epsilon used for the epsilon-greedy approach
GAMMA = 0.9
TARGET_NETWORK_REPLACE_FREQ = 40   # how frequently the target network updates
MEMORY_CAPACITY = 100
BATCH_SIZE = 80
LR = 0.01            # learning rate

Transition = namedtuple('Transition', ('state', 'next_state', 'action', 'reward'))

class DQNNet(nn.Module):
    def __init__(self):
        super(DQNNet, self).__init__()
        self.linear1 = nn.Linear(35, 35)
        self.linear2 = nn.Linear(35, 5)

    def forward(self, s):
        s = torch.FloatTensor(s)
        s = s.view(s.size(0), 1, 35)   # flatten the [5, 7] observation to 35 inputs
        s = self.linear1(s)
        s = self.linear2(s)
        return s

class DQN(object):
    def __init__(self):
        self.net, self.target_net = DQNNet(), DQNNet()
        self.learn_step_counter = 0
        self.memory = []
        self.position = 0
        self.capacity = MEMORY_CAPACITY
        self.optimizer = torch.optim.Adam(self.net.parameters(), lr=LR)
        self.loss_func = nn.MSELoss()

    def choose_action(self, s, e):
        x = np.expand_dims(s, axis=0)
        if np.random.uniform() < 1 - e:           # exploit with probability 1-e
            actions_value = self.net.forward(x)
            action = torch.max(actions_value, -1)[1].data.numpy()
            action = action.max()
        else:                                     # explore: pick a random action
            action = np.random.randint(0, 5)
        return action

    def push_memory(self, s, a, r, s_):
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(torch.unsqueeze(torch.FloatTensor(s), 0),
                                                torch.unsqueeze(torch.FloatTensor(s_), 0),
                                                torch.from_numpy(np.array([a])),
                                                torch.from_numpy(np.array([r], dtype='float32')))
        self.position = (self.position + 1) % self.capacity

    def get_sample(self, batch_size):
        sample = random.sample(self.memory, batch_size)
        return sample

    def learn(self):
        if self.learn_step_counter % TARGET_NETWORK_REPLACE_FREQ == 0:
            self.target_net.load_state_dict(self.net.state_dict())
        self.learn_step_counter += 1

        transitions = self.get_sample(BATCH_SIZE)
        batch = Transition(*zip(*transitions))
        b_s = Variable(torch.cat(batch.state))
        b_s_ = Variable(torch.cat(batch.next_state))
        b_a = Variable(torch.cat(batch.action))
        b_r = Variable(torch.cat(batch.reward))

        q_eval = self.net.forward(b_s).squeeze(1).gather(1, b_a.unsqueeze(1).to(torch.int64))
        q_next = self.target_net.forward(b_s_).detach()
        q_target = b_r + GAMMA * q_next.squeeze(1).max(1)[0].view(BATCH_SIZE, 1).t()
        loss = self.loss_func(q_eval, q_target.t())
        self.optimizer.zero_grad()   # reset the gradient to zero
        loss.backward()
        self.optimizer.step()        # execute back propagation for one step
        return loss
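As a quick sanity check (my own addition, not part of the original article), a dummy observation can be pushed through the network to confirm that five Q-values come out; note that DQNNet.forward keeps a middle dimension of 1, so the output shape is (1, 1, 5):

import numpy as np

net = DQNNet()
dummy = np.zeros((1, 5, 7), dtype=np.float32)   # one Kinematics observation
q = net(dummy)
print(q.shape)   # torch.Size([1, 1, 5]) -- one Q-value per action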

3. Results

Once each part is in place, they can be put together to train the model. The procedure is much the same as with CARLA, so I won't dwell on it.

Initialize the environment (just add the DQN class defined above):

import gym
import highway_env
from matplotlib import pyplot as plt
import numpy as np
import time

config = {
    "observation": {
        "type": "Kinematics",
        "vehicles_count": 5,
        "features": ["presence", "x", "y", "vx", "vy", "cos_h", "sin_h"],
        "features_range": {
            "x": [-100, 100],
            "y": [-100, 100],
            "vx": [-20, 20],
            "vy": [-20, 20]
        },
        "absolute": False,
        "order": "sorted"
    },
    "simulation_frequency": 8,  # [Hz]
    "policy_frequency": 2,      # [Hz]
}

env = gym.make("highway-v0")
env.configure(config)
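A quick check of the configured environment (my own sketch, assuming the older gym API used throughout this post, where reset() returns only the observation):

obs = env.reset()
print(obs.shape)         # (5, 7): 5 vehicles x 7 Kinematics features
print(env.action_space)  # Discrete(5) with the default meta-action type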

Train the model:

dqn = DQN()
count = 0

reward = []
avg_reward = 0
all_reward = []

time_ = []
all_time = []

collision_his = []
all_collision = []

while True:
    done = False
    start_time = time.time()
    s = env.reset()

    while not done:
        e = np.exp(-count / 300)   # probability of a random action, decays as training goes on
        a = dqn.choose_action(s, e)
        s_, r, done, info = env.step(a)
        env.render()

        dqn.push_memory(s, a, r, s_)

        if ((dqn.position != 0) & (dqn.position % 99 == 0)):
            loss_ = dqn.learn()
            count += 1
            print('trained times:', count)
            if (count % 40 == 0):
                avg_reward = np.mean(reward)
                avg_time = np.mean(time_)
                collision_rate = np.mean(collision_his)

                all_reward.append(avg_reward)
                all_time.append(avg_time)
                all_collision.append(collision_rate)

                plt.plot(all_reward)
                plt.show()
                plt.plot(all_time)
                plt.show()
                plt.plot(all_collision)
                plt.show()

                reward = []
                time_ = []
                collision_his = []

        s = s_
        reward.append(r)

    end_time = time.time()
    episode_time = end_time - start_time
    time_.append(episode_time)

    is_collision = 1 if info['crashed'] == True else 0
    collision_his.append(is_collision)
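After training, the learned policy can be evaluated greedily by passing e=0 to choose_action, so the random branch is never taken (a sketch of my own, not from the original post):

# Greedy evaluation episode: e=0 means choose_action always picks argmax Q
s = env.reset()
done = False
total_reward = 0
while not done:
    a = dqn.choose_action(s, 0)
    s, r, done, info = env.step(a)
    total_reward += r
    env.render()
print('evaluation reward:', total_reward)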

I also added a few lists for plotting in the code, so some key metrics can be monitored while it runs; the averages are computed every 40 training iterations.

Average collision rate:

Average epoch duration (s):

Average reward:

It can be seen that the average collision rate drops as training proceeds, while the duration of each epoch grows accordingly (an epoch ends immediately once a collision occurs).

4. Summary

Compared with CARLA, which I used in earlier articles, the highway-env environment is clearly more abstract: it uses a representation close to a video game, so the algorithm can be trained in an idealized virtual environment without worrying about real-world issues such as how to acquire the data, sensor accuracy, or computation time. That makes it friendly for end-to-end algorithm design and testing, but from a vehicle-engineering point of view there are fewer levers to work with, which makes research on it less efficient.
