December 12: has_many :through and association options

This post works through the methods Rails adds for model associations, including the difference between build and create and how to use has_many :through, and looks at how options such as :dependent and :foreign_key customize association behavior.
has_many :products, through: :cart_items, source: :product

build: collection.build(attributes = {}, ...). In this example, collection is replaced by cart_items.

Explanation: this method returns one or more new objects of the associated type, but the associated objects are not yet saved; you still need to call save.

create: collection.create(attributes = {})

Explanation: returns a single newly built associated object, which is saved automatically.
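The build/create distinction can be illustrated with a plain-Ruby sketch (no Rails required); the CartItem and CartItemsCollection classes below are stand-ins for Active Record's behavior, not its real implementation:

```ruby
# Stand-in for an Active Record model: tracks whether it has been saved.
class CartItem
  attr_reader :attributes

  def initialize(attributes = {})
    @attributes = attributes
    @persisted  = false
  end

  def save
    @persisted = true
  end

  def persisted?
    @persisted
  end
end

# Stand-in for the association proxy returned by cart.cart_items.
class CartItemsCollection
  # build returns a new, UNSAVED object of the associated type.
  def build(attributes = {})
    CartItem.new(attributes)
  end

  # create builds the object and saves it immediately.
  def create(attributes = {})
    item = build(attributes)
    item.save
    item
  end
end

cart_items = CartItemsCollection.new

unsaved = cart_items.build(:product_id => 1)
saved   = cart_items.create(:product_id => 2)

unsaved.persisted?  # => false (still needs an explicit save)
saved.persisted?    # => true  (already saved)
```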

source: specifies the source association name for a has_many :through association. You only need to set this option when the source association's name cannot be inferred from the association's own name.

Note: this is one of the options accepted by has_many.


 


Reference: Active Record Association Reference

http://guides.rubyonrails.org/v2.3.11/association_basics.html#has-one-association-reference 

4.3 has_many Association Reference

My understanding: once two models are linked by a one-to-many association, the model on the "one" side automatically gains 13 methods for working with the association, such as build, create, and so on.

Example:

class Customer < ActiveRecord::Base
  has_many :orders
end

 

collection(force_reload = false)  Example: @orders = @customer.orders
collection<<(object, ...)  Example: @customer.orders << @order1

Explanation: adds one or more order objects. If an object was removed with delete, this method can add it back, automatically setting the object's foreign key so that it is no longer nil.

collection.delete(object, ...)  Example: @customer.orders.delete(@order1)

In short: it unlinks without deleting. The method removes one or more objects from @customer.orders by setting the removed object's foreign key to nil, so a later call to @customer.orders no longer includes @order1, but the @order1 row still exists in the orders table.

collection.clear removes every object from the collection, in the same way delete does.
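The delete / << / clear behavior above can be sketched in plain Ruby (stand-in classes, not Active Record internals); the key point is that delete only clears the foreign key and never removes the underlying row:

```ruby
# Stand-in for an Order row: only the foreign key matters here.
class Order
  attr_accessor :customer_id

  def initialize(customer_id)
    @customer_id = customer_id
  end
end

# Stand-in for the @customer.orders association proxy.
class OrdersCollection
  def initialize(owner_id, orders)
    @owner_id = owner_id
    @orders   = orders
  end

  # delete unlinks by nulling the foreign key; the Order object
  # (the database row, in real Rails) continues to exist.
  def delete(*orders)
    orders.each { |o| o.customer_id = nil }
    @orders -= orders
  end

  # << re-links an object by setting its foreign key back to the owner.
  def <<(order)
    order.customer_id = @owner_id
    @orders << order
  end

  # clear unlinks every member, like delete on all of them.
  def clear
    delete(*@orders)
  end

  def size
    @orders.size
  end
end

order1 = Order.new(42)
order2 = Order.new(42)
orders = OrdersCollection.new(42, [order1, order2])

orders.delete(order1)
order1.customer_id  # => nil (unlinked, but the object still exists)
orders.size         # => 1

orders << order1    # adding it back restores the foreign key
order1.customer_id  # => 42
orders.size         # => 2
```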

 

collection.empty?  Explanation: returns true if the collection is empty.

collection.exists?(...)  Explanation: returns true or false depending on whether any associated object matches the given conditions; with no arguments, it simply reports whether any associated object exists.

collection.size  Explanation: returns the number of objects in the collection.

 

collection.find(...)  Example:

@open_orders = @customer.orders.find(:all, :conditions => "open = 1")

Explanation: the finder is automatically scoped to the association, so the query filters on the customer's foreign key in addition to the given conditions, roughly SELECT * FROM orders WHERE orders.customer_id = <customer id> AND (open = 1). (This is Rails 2.3 finder syntax; in Rails 3 and later you would write @customer.orders.where(:open => 1).)


4.3.2 Options for has_many

You can alter that behavior in a number of ways. For example, an association with several options might look like this:

class Customer < ActiveRecord::Base
  has_many :orders, :dependent => :delete_all,
                    :validate => false
end

In total there are 22 options available for customization.

Common examples:

 

:validate, if set to false, the associated orders objects are not validated when the customer object is saved.

:as, indicates that this is a polymorphic association (I have not yet found a situation where this comes in handy); see 2.9 Polymorphic Associations.

 

:autosave, when the parent object is saved, automatically saves all child objects and destroys any that are marked for destruction.

 

:class_name, example: if a customer has many orders, but the model that actually holds the orders is named Transaction, you need the following:

class Customer < ActiveRecord::Base
  has_many :orders, :class_name => "Transaction"
end
 

:source, specifies the source association name for a has_many :through (i.e. many-to-many) association; it is only needed when the source association's name cannot be inferred from the association name. Its purpose is similar to :class_name, but the two apply in different places.

 

:through, one of the ways to build a many-to-many association.

 

:conditions, filters the association: associated objects must satisfy the given conditions.

:dependent => :destroy, deletes the associated child objects, calling each child's destroy callbacks before deletion.

:dependent => :delete_all, deletes the children directly, without running callbacks.

:dependent => :nullify, sets the children's foreign key to NULL so they are no longer associated (the rows are kept).
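The three strategies can be contrasted with a plain-Ruby sketch (stand-in classes and helper methods, not Active Record internals); the differences are whether callbacks run and whether the rows survive:

```ruby
# Stand-in child record: remembers whether its destroy callback ran.
class Order
  attr_accessor :customer_id
  attr_reader :callback_ran

  def initialize(customer_id)
    @customer_id  = customer_id
    @callback_ran = false
  end

  # In real Rails, destroy runs the model's before/after callbacks.
  def destroy
    @callback_ran = true
  end
end

# :dependent => :destroy - each child is destroyed, callbacks fire.
def dependent_destroy(table, children)
  children.each(&:destroy)
  table.reject! { |row| children.include?(row) }
end

# :dependent => :delete_all - one bulk delete, NO callbacks.
def dependent_delete_all(table, children)
  table.reject! { |row| children.include?(row) }
end

# :dependent => :nullify - rows survive; only the foreign key is cleared.
def dependent_nullify(_table, children)
  children.each { |o| o.customer_id = nil }
end

a, b, c = Order.new(1), Order.new(1), Order.new(1)
table = [a, b, c]

dependent_destroy(table, [a])     # a removed, a.callback_ran == true
dependent_delete_all(table, [b])  # b removed, b.callback_ran == false
dependent_nullify(table, [c])     # c stays,   c.customer_id == nil
```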

 

:foreign_key, by convention, Rails guesses that the column used to hold the foreign key on the other model is the name of this model with _id added. My understanding: if you break with the Rails convention and the column is not named after the model (say, some XX_id column), you declare has_many :orders, :foreign_key => "XX_id" to designate that column as the foreign key.

Example:

class Customer < ActiveRecord::Base
  has_many :orders, :foreign_key => "cust_id"
end


:include, eager-loads second-order associations. For example, if a customer has many orders and each order has many line items, declaring has_many :orders, :include => :line_items makes Rails preload each order's line items when the orders are loaded, avoiding N+1 queries (it does not create a @customer.orders.lineitems method; each order's line_items are simply loaded in advance).

 

:limit, limits the number of child objects that will be fetched. Example:

class Customer < ActiveRecord::Base
  has_many :recent_orders, :class_name => "Order",
           :order => "order_date DESC", :limit => 100
end

 

:order, specifies the order in which the associated child objects are returned.

 

:offset, example: :offset => 11 skips the first 11 records when fetching the association.
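Conceptually, :order, :limit, and :offset apply in sequence; the plain-Ruby sketch below mimics them over an in-memory array (the OrderRow struct and dates are invented for illustration; in real Rails these options become ORDER BY / LIMIT / OFFSET in the SQL):

```ruby
require 'date'

OrderRow = Struct.new(:id, :order_date)

rows = [
  OrderRow.new(1, Date.new(2017, 1, 5)),
  OrderRow.new(2, Date.new(2017, 3, 9)),
  OrderRow.new(3, Date.new(2017, 2, 1)),
  OrderRow.new(4, Date.new(2017, 4, 2)),
]

# Equivalent of :order => "order_date DESC", :offset => 1, :limit => 2
recent = rows
  .sort_by { |r| r.order_date }.reverse  # ORDER BY order_date DESC
  .drop(1)                               # OFFSET 1: skip the first record
  .take(2)                               # LIMIT 2:  keep at most two

recent.map(&:id)  # => [2, 3]
```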

 

Reposted from: https://www.cnblogs.com/chentianwei/p/8026898.html
