Demystifying Laravel 10 Event Broadcast Drivers: How Do You Choose the Broadcasting Solution That Best Fits Your Project?

Chapter 1: An Overview of Laravel 10 Event Broadcast Drivers

Laravel 10 ships with a powerful event broadcasting system that lets developers push server-side events to clients in real time. The mechanism is built on an abstracted broadcast driver layer, so an application can flexibly connect to different message brokers such as Pusher, Redis, or Socket.io.

The Core Role of Broadcast Drivers

An event broadcast driver serializes the events fired by Laravel and delivers them to the front end over the specified channels. Every driver implements the same interface, so you can switch the underlying service without touching business logic; a minimal broadcastable event class is sketched right after the list below.
  • Pusher Channels: a commercial solution for getting to production quickly, with WebSocket-based real-time communication
  • Redis + Socket.IO: suited to self-hosted setups, combining Laravel broadcasting with Node.js to build high-concurrency systems
  • Null driver: used in test environments; no messages are actually sent
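To make the "unified interface" above concrete, here is a minimal sketch of a broadcastable event. The OrderShipped class name and the orders.{id} channel are illustrative choices, not part of the original article; the ShouldBroadcast contract and the broadcastOn() method are standard Laravel:
<?php

namespace App\Events;

use App\Models\Order;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class OrderShipped implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public function __construct(public Order $order)
    {
    }

    // The channel(s) this event should broadcast on.
    public function broadcastOn(): array
    {
        return [new PrivateChannel('orders.'.$this->order->id)];
    }
}
Because the event only depends on the ShouldBroadcast contract, the same class works unchanged whether the configured driver is Pusher, Redis, or the null driver.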

Configuring the Broadcast Driver

The default driver is set in the config/broadcasting.php file:
// config/broadcasting.php
'default' => env('BROADCAST_DRIVER', 'null'),

'connections' => [
    'pusher' => [
        'driver' => 'pusher',
        'key' => env('PUSHER_APP_KEY'),
        'secret' => env('PUSHER_APP_SECRET'),
        'app_id' => env('PUSHER_APP_ID'),
        'options' => [
            'host' => env('PUSHER_HOST') ?: 'api-'.env('PUSHER_APP_CLUSTER', 'mt1').'.pusher.com',
            'port' => env('PUSHER_PORT') ?: 443,
            'scheme' => env('PUSHER_SCHEME') ?: 'https',
            'encrypted' => true,
        ],
    ],
],
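The configuration above reads its credentials from environment variables, so the matching entries typically live in the project's .env file. The values below are placeholders rather than real credentials, and mt1 is only an example cluster:
BROADCAST_DRIVER=pusher

PUSHER_APP_ID=your-app-id
PUSHER_APP_KEY=your-app-key
PUSHER_APP_SECRET=your-app-secret
PUSHER_HOST=
PUSHER_PORT=443
PUSHER_SCHEME=https
PUSHER_APP_CLUSTER=mt1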

Comparing the Available Drivers

Driver | Transport protocol | Typical use case
Pusher | WebSocket | Small to mid-sized projects, quick integration
Redis + Laravel Echo Server | WebSocket | Large self-hosted systems
Null | — | Local development and unit tests
graph LR
    A[Laravel Application] --> B{Broadcast Manager}
    B --> C[Pusher Driver]
    B --> D[Redis Driver]
    B --> E[Null Driver]
    C --> F[Pusher API]
    D --> G[Node.js Echo Server]

Chapter 2: A Deep Dive into the Core Mechanics of Laravel Broadcasting

2.1 Broadcasting Architecture and How It Works

At its core, a broadcasting system synchronizes messages efficiently from a single source to many receivers. A typical architecture has three parts: message publishers, a central broadcast node, and distributed subscribers, with asynchronous communication keeping the system scalable and low-latency.
Data Distribution Topologies
Common topologies are star, tree, and hybrid. A tree topology offloads the central node and suits large deployments:
  • Star: every node connects directly to the center; simple to control, but scales poorly
  • Tree: layered forwarding reduces bandwidth pressure
  • Hybrid: combines the strengths of both and adapts to complex network environments
Message Propagation Example
func broadcastMessage(msg []byte, subscribers []Connection) {
    for _, conn := range subscribers {
        go func(c Connection) {
            c.Write(msg) // asynchronous write so a slow client does not block the loop
        }(conn)
    }
}
The code above shows the basic broadcast logic: the message is sent to all connected clients in parallel. Each send runs in its own goroutine so writes never block, which keeps throughput high. The subscribers parameter holds the currently active subscriber connections and should be kept up to date by a heartbeat mechanism.

2.2 How Events, Broadcasting, and Queues Work Together

In modern distributed systems, an event-driven architecture decouples components to improve scalability and responsiveness. When an event occurs, the system publishes it to a message queue, and the broadcast mechanism notifies multiple subscribers, which process it asynchronously.
Event Triggering and Queue Buffering
When core business logic fires an event (for example, an order being created), the event is serialized and pushed onto a message queue (such as RabbitMQ or Kafka), which smooths traffic spikes and isolates failures.
// Example: pushing an order event onto the queue
func PublishOrderEvent(orderID string) {
    event := Event{Type: "order.created", Payload: orderID}
    data, _ := json.Marshal(event)
    queue.Publish("events", data) // publish to the queue named "events"
}
The code above writes the order-created event to the message queue. queue.Publish is a non-blocking call, so the main flow stays responsive.
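On the Laravel side the same buffering idea applies: any event that implements ShouldBroadcast is pushed onto the configured queue and broadcast by a queue worker rather than inside the web request. The sketch below is illustrative; the OrderCreated class name, the orders channel, and the broadcasts queue name are assumptions, while the $connection and $queue properties are the standard way to direct a broadcast event to a specific queue:
<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;

class OrderCreated implements ShouldBroadcast
{
    use Dispatchable;

    // Queue connection and queue name used when broadcasting this event.
    public $connection = 'redis';
    public $queue = 'broadcasts';

    public function __construct(public string $orderId)
    {
    }

    public function broadcastOn(): array
    {
        return [new Channel('orders')];
    }
}

// In the business logic:
// event(new OrderCreated($orderId)); // returns immediately; a queue worker performs the broadcast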
Broadcast Distribution and Consumer Processing
The message queue uses a broadcast (fan-out) pattern to deliver the event to multiple consumer groups, and each service runs the logic it is responsible for, such as sending emails or updating inventory.
Component | Responsibility
Event producer | Generates and publishes events
Message queue | Buffers and persists events
Broadcast hub | Delivers events to multiple subscribers
Event consumer | Receives events and runs the business logic

2.3 Broadcast Channel Types Explained: Public, Private, and Presence Channels

In a real-time communication system, broadcast channels fall into three categories by access control and purpose: public channels, private channels, and presence channels.
Public Channels
Any client can subscribe and receive messages; no authentication is required. Public channels suit announcement-style information.
Private Channels
Only authorized users may subscribe, typically for sensitive data. The subscription request must carry valid authentication credentials:

const channel = pusher.subscribe('private-user-updates');
channel.bind('update', (data) => {
  console.log('收到私有消息:', data);
});
The code above uses the Pusher client to subscribe to a private channel named private-user-updates. On the server, the broadcast authorization endpoint checks the subscription request (in Laravel, via the callback registered for the channel) before the subscription is signed.
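In Laravel, that server-side check lives in routes/channels.php. The sketch below assumes the private-user-updates channel from the snippet above (Laravel adds the private- prefix automatically, so only user-updates is registered) and a simple "any authenticated user" rule, which is an assumption rather than something the article specifies:
<?php
// routes/channels.php

use Illuminate\Support\Facades\Broadcast;

// Authorize subscriptions to the "private-user-updates" channel.
Broadcast::channel('user-updates', function ($user) {
    // Return true to authorize the subscription, false to reject it.
    return $user !== null;
});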
Presence Channels
Presence channels extend private channels with member-state management, including the ability to fetch the list of members currently online; the server-side authorization is sketched after the list below.
  • pusher:member_added fires when a member joins
  • Presence state such as "typing" indicators can be displayed in real time
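For a presence channel, the authorization callback returns the member data to share with other subscribers instead of a plain boolean. The chat.{roomId} channel name, the canJoinRoom() check, and the returned fields below are illustrative assumptions:
<?php
// routes/channels.php

use Illuminate\Support\Facades\Broadcast;

// Presence channel: the returned array becomes the member's public info.
Broadcast::channel('chat.{roomId}', function ($user, int $roomId) {
    if ($user->canJoinRoom($roomId)) { // hypothetical authorization check
        return ['id' => $user->id, 'name' => $user->name];
    }

    return false; // deny the subscription
});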

2.4 The Driver Abstraction Layer: BroadcastManager and Connection Resolution

Core Responsibilities of the Broadcast Manager
BroadcastManager is the key component of the driver abstraction layer: it coordinates message broadcasting across nodes in a uniform way, hides the differences between underlying transports, and exposes a consistent event-dispatch interface to the layers above.
Connection Resolution
The system uses a connection resolver to dynamically determine each target node's protocol and address. The resolution rules are configurable, so they can adapt to a variety of network environments.
func (bm *BroadcastManager) Broadcast(event Event) error {
    nodes := bm.Resolver.Discover() // fetch the currently active nodes
    for _, node := range nodes {
        go bm.send(event, node.Address) // send asynchronously
    }
    return nil
}
This method iterates over the resolved node list and pushes the event to each node asynchronously. Resolver implements the service-discovery logic, and the Address field encapsulates the IP, port, and protocol type.
Field | Description
Resolver | Handles node discovery and connection-info resolution
send() | Chooses the concrete transport based on the connection protocol
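In Laravel itself, the class playing this role is Illuminate\Broadcasting\BroadcastManager, and it can be taught a new transport at runtime. The sketch below registers a hypothetical custom driver: the NatsBroadcaster class and the nats connection are invented for illustration, while Broadcast::extend and the Broadcaster contract (auth, validAuthenticationResponse, broadcast) belong to the framework:
<?php
// e.g. in a service provider's boot() method

use Illuminate\Support\Facades\Broadcast;

Broadcast::extend('nats', function ($app, array $config) {
    // NatsBroadcaster is a hypothetical class implementing
    // Illuminate\Contracts\Broadcasting\Broadcaster.
    return new \App\Broadcasting\NatsBroadcaster($config);
});

// config/broadcasting.php could then define:
// 'connections' => ['nats' => ['driver' => 'nats', 'url' => env('NATS_URL')]],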

2.5 Hands-On: Building a Broadcastable Event from Scratch

In modern front-end architecture, an event broadcast mechanism is one of the core tools for decoupling component communication. In this section we hand-roll a lightweight event bus that supports subscribing, publishing, and unsubscribing.
Designing the Event Bus
The core features are:
  • on(event, callback): register an event listener
  • emit(event, data): fire an event and pass data to its listeners
  • off(event, callback): remove a specific listener
Implementation

class EventBus {
  constructor() {
    this.events = {};
  }
  on(event, callback) {
    if (!this.events[event]) this.events[event] = [];
    this.events[event].push(callback);
  }
  emit(event, data) {
    if (this.events[event]) {
      this.events[event].forEach(fn => fn(data));
    }
  }
  off(event, callback) {
    if (this.events[event]) {
      this.events[event] = this.events[event].filter(fn => fn !== callback);
    }
  }
}
In the code above, the events object uses event names as keys and stores arrays of callbacks. The emit method walks through every registered callback and invokes it, which produces the broadcast effect.

Chapter 3: Comparing the Mainstream Broadcast Drivers and Choosing Between Them

3.1 The Redis Driver: An Ideal Choice for High-Performance Scenarios

For workloads with strict high-concurrency, low-latency requirements, the Redis driver is a natural first choice thanks to its in-memory storage and non-blocking I/O. Its key strength is very high read/write throughput, which suits caching, session storage, real-time counters, and similar workloads.
Connection Initialization Example

client := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379",
    Password: "", 
    DB:       0,
    PoolSize: 100, // connection pool size
})
This code creates a Redis client; setting PoolSize to 100 comfortably supports highly concurrent requests and avoids the overhead of repeatedly opening connections.
Key Performance Parameters
Parameter | Default | Recommended (high-performance scenarios)
MaxRetries | 3 | 5
ReadTimeout | 3s | 500ms
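To use Redis as the broadcast driver in Laravel, the broadcasting config simply points at one of the Redis connections defined in config/database.php. The redis entry below matches the framework's stock definition; the only assumption is that a default Redis connection is already configured and that BROADCAST_DRIVER is set to redis in .env:
// config/broadcasting.php — select the redis driver (BROADCAST_DRIVER=redis in .env)
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default', // a connection name from config/database.php
    ],
],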

3.2 The Pusher Driver: A Turnkey Cloud Solution

Real-Time Communication with Minimal Setup
Pusher offers a cloud service that provides WebSocket communication without running your own back-end server. Developers only need to pull in the SDK and configure the credentials to enable broadcasting in Laravel or another framework.

// app/Providers/BroadcastServiceProvider.php (boot method)
Broadcast::routes(['middleware' => ['auth:api']]);
// config/broadcasting.php
'connections' => [
    'pusher' => [
        'driver' => 'pusher',
        'key' => env('PUSHER_APP_KEY'),
        'secret' => env('PUSHER_APP_SECRET'),
        'app_id' => env('PUSHER_APP_ID'),
        'options' => [
            'cluster' => env('PUSHER_APP_CLUSTER'),
            'useTLS' => true,
        ],
    ],
]
The configuration above defines the parameters for a secure connection to Pusher's servers: cluster selects the geographic region to reduce latency, and useTLS forces encrypted transport.
Key Advantages at a Glance
  • No operations burden: there are no WebSocket server instances to manage
  • Elastic scaling: connection spikes are absorbed automatically
  • Cross-platform support: SDKs are available for JavaScript, iOS, Android, and more

3.3 The Soketi Driver: Evaluating a Lightweight Open-Source Alternative

Architecture and Core Advantages
Soketi is an open-source WebSocket server compatible with the Pusher protocol and a popular fit for the Laravel ecosystem, known for being lightweight and fast. Built on Node.js, it supports horizontal scaling and works well for real-time applications with low to medium concurrency.
Deployment Configuration Example
{
  "appManager": {
    "driver": "soketi",
    "options": {
      "host": "127.0.0.1",
      "port": 6001
    }
  }
}
This configuration defines the Soketi service address and port: host is the IP to listen on and port is the WebSocket port, which must match what the front-end Pusher client connects to.
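Because Soketi speaks the Pusher protocol, the Laravel side keeps using the pusher driver and simply points it at the local server. The sketch below assumes Soketi is listening on 127.0.0.1:6001 without TLS, matching the configuration above:
// config/broadcasting.php — pusher driver aimed at a self-hosted Soketi instance
'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'host' => env('PUSHER_HOST', '127.0.0.1'),
        'port' => env('PUSHER_PORT', 6001),
        'scheme' => env('PUSHER_SCHEME', 'http'),
        'useTLS' => false, // no TLS for the local Soketi instance
    ],
],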
Performance Comparison
Metric | Soketi | Pusher (commercial)
Connection latency | ≈80ms | ≈60ms
Capacity per node | ~5K connections | No hard limit (managed)

Chapter 4: Designing and Optimizing a Highly Available Broadcasting Setup

4.1 Scaling Broadcasts with Redis Cluster

In a distributed system, broadcasting efficiently over Redis Cluster requires solving the cross-node communication problem. The single-instance publish/subscribe model cannot be applied to a cluster directly, so a proxy-based coordination strategy is introduced.
Data Synchronization Mechanism
A listening proxy is deployed alongside each Redis node so that messages propagate across hash slots. When a node receives a publish request, the proxy forwards the message to the other master nodes, ensuring every client can receive the broadcast.
// Pseudocode: forwarding a message across nodes
func onPublish(channel, message string) {
    for _, node := range cluster.Nodes {
        node.Client.Publish("_broadcast:" + channel, message)
    }
}
In this logic, every node listens on the global broadcast channels carrying the _broadcast: prefix, which avoids collisions with business channels while keeping messages reachable everywhere.
Performance Optimization Strategies
  • Use asynchronous, non-blocking I/O to reduce forwarding latency
  • Batch and throttle high-frequency messages
  • Reuse inter-node links through a connection pool
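On the Laravel side, pointing the broadcaster at a Redis Cluster is mostly a config/database.php concern, since the redis broadcast connection reuses whatever Redis connection is configured there. The sketch below uses the clusters block that Laravel's database config supports; the host name is a placeholder and listing a single seed node is an assumption (in practice you would list several):
// config/database.php
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),

    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'), // native Redis Cluster mode
    ],

    'clusters' => [
        'default' => [
            [
                'host' => env('REDIS_HOST', 'redis-node-1'), // placeholder host
                'port' => env('REDIS_PORT', 6379),
                'database' => 0, // cluster mode only supports database 0
            ],
        ],
    ],
],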

4.2 Message Acknowledgements and Retries for Reliable Delivery

In a distributed messaging system, reliable delivery hinges on acknowledgements and retries working together. After processing a message, the consumer sends an explicit acknowledgement (ACK); only once the broker has received the ACK does it safely delete the message.
Acknowledgement Modes and Failure Handling
The common modes are automatic and manual acknowledgement. Manual acknowledgement is safer: when processing fails, the consumer can reject the message and trigger a retry.
  • Message not acknowledged: the broker requeues it or delivers it to another consumer
  • Processing error: a NACK tells the broker to retry
Retry Strategy Example
channel.Qos(1, 0, false) // prefetch only one unacknowledged message at a time
delivery, _ := channel.Consume(queueName, "", false, false, false, false, nil)
go func() {
    for d := range delivery {
        if err := process(d.Body); err != nil {
            d.Nack(false, true) // requeue the message for another attempt
        } else {
            d.Ack(false)
        }
    }
}()
The code above sets the prefetch count to 1 to prevent a backlog of unacknowledged messages; on failure it calls Nack to trigger a retry, so messages are not lost.
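Laravel's queue layer exposes the same knobs for whatever consumes broadcast-related work. The sketch below is a generic queued event listener with retry settings; the SendOrderNotification class name and the backoff values are illustrative (it reuses the OrderCreated event assumed earlier), while $tries, backoff(), and the failed() hook are standard queued-listener features:
<?php

namespace App\Listeners;

use App\Events\OrderCreated;
use Illuminate\Contracts\Queue\ShouldQueue;
use Throwable;

class SendOrderNotification implements ShouldQueue
{
    // Give up after three failed attempts.
    public $tries = 3;

    // Wait 5s, 30s, then 120s between retries.
    public function backoff(): array
    {
        return [5, 30, 120];
    }

    public function handle(OrderCreated $event): void
    {
        // ... send the notification; throwing here triggers a retry
    }

    public function failed(OrderCreated $event, Throwable $exception): void
    {
        // called after the final attempt fails
    }
}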

4.3 Managing Client Subscription State and Handling Failures

In a real-time system, each client's subscription state must be tracked precisely so that messages are delivered correctly and resources are released properly. If subscription state is not handled carefully during network jitter or service interruptions, the result is duplicate subscriptions or lost messages.
The Subscription Lifecycle
After the connection is established, the client registers the topics it subscribes to with the server and keeps a local state machine for each one. Typical states are: unsubscribed, subscribing, subscribed, and disconnecting.
// Example: a subscription-state structure in Go
type Subscription struct {
    Topic   string
    Active  bool
    Retry   int
    LastSeen time.Time
}
This struct tracks the subscription state per topic: Active indicates whether the subscription is currently valid, and Retry caps the number of reconnect attempts to prevent a reconnection storm.
Failure-Handling Strategy
Use exponential backoff when reconnecting to ride out network jitter, and use heartbeats to check connection health. The server should support session persistence so a recovering client can quickly re-establish its subscriptions.
  • Fire an onDisconnect event when the connection drops
  • Clean up stale subscription handles to avoid memory leaks
  • Use a unique client ID to associate the session context

4.4 Load Testing and Latency Optimization

Designing the Baseline Load Test
Use wrk to run a high-concurrency baseline test, for example:
wrk -t12 -c400 -d30s http://localhost:8080/api/v1/users
This command starts 12 threads, maintains 400 persistent connections, and runs the test for 30 seconds. The -t flag controls the thread count, -c the number of concurrent connections, and -d the test duration; adjusting them simulates different load profiles.
Key Latency Optimization Strategies
  • Set GOMAXPROCS to match the number of CPU cores to improve scheduling efficiency
  • Manage database connections with a pool to avoid the cost of repeatedly reconnecting
  • Cache hot data in Redis to reduce back-end load
Performance Before and After
Variant | Average latency (ms) | QPS
Baseline | 48 | 2100
Optimized | 19 | 5600

Chapter 5: Future Trends and the Direction of the Ecosystem

Deeper Integration with Cloud-Native Architecture
Modern applications are moving quickly to cloud-native patterns, and Kubernetes has become the de facto standard for container orchestration. Declarative configuration gives teams infrastructure as code (IaC), improving deployment consistency and maintainability.
  • Service meshes (such as Istio) secure communication between microservices
  • OpenTelemetry unifies metrics, logs, and traces
  • Serverless frameworks (such as Knative) provide elastically scaling function compute
Distributed Architecture Driven by Edge Computing
As IoT devices multiply, data processing is sinking from central clouds toward edge nodes. In smart manufacturing, for example, an on-premises gateway analyzes sensor data in real time, cutting latency and bandwidth usage.

// Example: lightweight data filtering on an edge node
func filterSensorData(data *SensorReading) bool {
    if data.Temperature > 85.0 { // high-temperature alert
        sendToCloud(data)
        return true
    }
    return false // drop normal readings locally
}
The Rise of AI-Native Development
AI models are gradually being woven into core business flows. Developers use MLOps toolchains (such as Kubeflow and MLflow) to unify model training, versioning, and online inference deployment.
Direction | Representative tools | Use case
Automated operations | Prometheus + AIOps | Anomaly detection and root-cause analysis
Low-code integration | Retool + LangChain | Rapidly building intelligent front-end interfaces
Collaborative Innovation in the Open-Source Ecosystem
Foundations such as CNCF and Apache drive cross-organization collaboration. Projects like etcd, gRPC, and Fluentd are integrated across many platforms, forming a highly interoperable stack, and community contributors rely on GitHub Actions for automated testing and security scanning to safeguard code quality.
200) 2025-07-30 09:56:41.816625 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:51.817154 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:01.817686 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:11.816756 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:21.817343 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:31.817003 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:41.816873 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:51.817207 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:01.816804 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:11.817017 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:21.816642 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:31.816838 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:41.816347 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:51.816459 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:01.817140 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:11.816554 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:21.816869 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:31.818716 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:41.816705 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:51.817851 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:01.816829 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:11.816569 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:21.816748 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:31.817200 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:41.816544 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:51.816793 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:01.817000 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:11.816522 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:21.816775 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:31.816684 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:41.816727 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:51.817073 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:01.817473 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:11.816961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:21.817283 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:31.816549 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:41.817031 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:51.816960 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:01.816855 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:11.816837 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:21.816528 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:31.816693 I | etcdserver/api/etcdhttp: /health 
OK (status code 200) 2025-07-30 10:03:41.816522 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:51.817027 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:04:01.816739 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:04:11.816916 I | etcdserver/api/etcdhttp: /health OK (status code 200) [root@node1 ~]# docker logs ff9b8d7587aa Flag --port has been deprecated, see --secure-port instead. I0730 09:55:21.608626 1 serving.go:331] Generated self-signed cert in-memory I0730 09:55:21.958575 1 controllermanager.go:176] Version: v1.20.9 I0730 09:55:21.960121 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I0730 09:55:21.960150 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I0730 09:55:21.961003 1 secure_serving.go:197] Serving securely on 127.0.0.1:10257 I0730 09:55:21.961148 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0730 09:55:21.961191 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... E0730 09:55:25.271255 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system" I0730 09:55:28.613626 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager I0730 09:55:28.613892 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="node1_503ebc03-4f42-47e4-9c9d-e9064b26d78b became leader" I0730 09:55:29.083585 1 shared_informer.go:240] Waiting for caches to sync for tokens I0730 09:55:29.183810 1 shared_informer.go:247] Caches are synced for tokens I0730 09:55:29.540299 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges I0730 09:55:29.540448 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io I0730 09:55:29.540486 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch I0730 09:55:29.540502 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0730 09:55:29.540561 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I0730 09:55:29.540575 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints I0730 09:55:29.540626 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps I0730 09:55:29.540672 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps I0730 09:55:29.540690 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0730 09:55:29.540706 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io W0730 09:55:29.540717 1 shared_informer.go:494] resyncPeriod 20h21m59.062252565s is smaller than resyncCheckPeriod 22h46m52.416059627s and the informer has already started. 
Changing it to 22h46m52.416059627s I0730 09:55:29.540798 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io I0730 09:55:29.540832 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps I0730 09:55:29.540845 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch W0730 09:55:29.540850 1 shared_informer.go:494] resyncPeriod 13h23m0.496914786s is smaller than resyncCheckPeriod 22h46m52.416059627s and the informer has already started. Changing it to 22h46m52.416059627s I0730 09:55:29.540864 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts I0730 09:55:29.540876 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps I0730 09:55:29.540886 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps I0730 09:55:29.540898 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io I0730 09:55:29.540909 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io I0730 09:55:29.540920 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates I0730 09:55:29.540980 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0730 09:55:29.541016 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions I0730 09:55:29.541024 1 controllermanager.go:554] Started "resourcequota" I0730 09:55:29.541243 1 resource_quota_controller.go:273] Starting resource quota controller I0730 09:55:29.541254 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0730 09:55:29.541267 1 resource_quota_monitor.go:304] QuotaMonitor running I0730 09:55:29.546774 1 node_lifecycle_controller.go:77] Sending events to api server E0730 09:55:29.546816 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided W0730 09:55:29.546824 1 controllermanager.go:546] Skipping "cloud-node-lifecycle" W0730 09:55:29.546837 1 controllermanager.go:546] Skipping "ttl-after-finished" W0730 09:55:29.546843 1 controllermanager.go:546] Skipping "ephemeral-volume" I0730 09:55:29.553569 1 controllermanager.go:554] Started "garbagecollector" I0730 09:55:29.553994 1 garbagecollector.go:142] Starting garbage collector controller I0730 09:55:29.554019 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0730 09:55:29.554078 1 graph_builder.go:289] GraphBuilder running I0730 09:55:29.559727 1 controllermanager.go:554] Started "statefulset" I0730 09:55:29.559830 1 stateful_set.go:146] Starting stateful set controller I0730 09:55:29.559837 1 shared_informer.go:240] Waiting for caches to sync for stateful set I0730 09:55:29.570038 1 controllermanager.go:554] Started "bootstrapsigner" I0730 09:55:29.570166 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer E0730 09:55:29.577020 1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0730 09:55:29.577049 1 controllermanager.go:546] Skipping "service" I0730 09:55:29.583964 1 controllermanager.go:554] Started "clusterrole-aggregation" I0730 09:55:29.584128 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator I0730 
09:55:29.584145 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator I0730 09:55:29.590189 1 controllermanager.go:554] Started "podgc" I0730 09:55:29.590277 1 gc_controller.go:89] Starting GC controller I0730 09:55:29.590285 1 shared_informer.go:240] Waiting for caches to sync for GC I0730 09:55:29.611665 1 controllermanager.go:554] Started "namespace" I0730 09:55:29.611840 1 namespace_controller.go:200] Starting namespace controller I0730 09:55:29.611851 1 shared_informer.go:240] Waiting for caches to sync for namespace I0730 09:55:29.644607 1 controllermanager.go:554] Started "replicaset" I0730 09:55:29.644643 1 replica_set.go:182] Starting replicaset controller I0730 09:55:29.644648 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet I0730 09:55:29.794545 1 controllermanager.go:554] Started "pvc-protection" I0730 09:55:29.794636 1 pvc_protection_controller.go:110] Starting PVC protection controller I0730 09:55:29.794643 1 shared_informer.go:240] Waiting for caches to sync for PVC protection I0730 09:55:29.947340 1 controllermanager.go:554] Started "endpointslicemirroring" I0730 09:55:29.947470 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller I0730 09:55:29.947481 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring I0730 09:55:30.096177 1 controllermanager.go:554] Started "deployment" I0730 09:55:30.096245 1 deployment_controller.go:153] Starting deployment controller I0730 09:55:30.096252 1 shared_informer.go:240] Waiting for caches to sync for deployment I0730 09:55:30.143147 1 node_ipam_controller.go:91] Sending events to api server. I0730 09:55:30.689188 1 request.go:655] Throttling request took 1.048578443s, request: GET:https://192.168.229.145:6443/apis/storage.k8s.io/v1?timeout=32s I0730 09:55:40.164520 1 range_allocator.go:82] Sending events to api server. I0730 09:55:40.164715 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. I0730 09:55:40.164875 1 controllermanager.go:554] Started "nodeipam" I0730 09:55:40.164991 1 node_ipam_controller.go:159] Starting ipam controller I0730 09:55:40.164996 1 shared_informer.go:240] Waiting for caches to sync for node I0730 09:55:40.170277 1 controllermanager.go:554] Started "serviceaccount" I0730 09:55:40.170396 1 serviceaccounts_controller.go:117] Starting service account controller I0730 09:55:40.170413 1 shared_informer.go:240] Waiting for caches to sync for service account I0730 09:55:40.187598 1 controllermanager.go:554] Started "horizontalpodautoscaling" I0730 09:55:40.187715 1 horizontal.go:169] Starting HPA controller I0730 09:55:40.187729 1 shared_informer.go:240] Waiting for caches to sync for HPA I0730 09:55:40.193364 1 controllermanager.go:554] Started "ttl" I0730 09:55:40.193485 1 ttl_controller.go:121] Starting TTL controller I0730 09:55:40.193491 1 shared_informer.go:240] Waiting for caches to sync for TTL I0730 09:55:40.194930 1 node_lifecycle_controller.go:380] Sending events to api server. I0730 09:55:40.195077 1 taint_manager.go:163] Sending events to api server. I0730 09:55:40.195125 1 node_lifecycle_controller.go:508] Controller will reconcile labels. I0730 09:55:40.195140 1 controllermanager.go:554] Started "nodelifecycle" W0730 09:55:40.195149 1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. 
W0730 09:55:40.195152 1 controllermanager.go:546] Skipping "route" I0730 09:55:40.195358 1 node_lifecycle_controller.go:542] Starting node controller I0730 09:55:40.195365 1 shared_informer.go:240] Waiting for caches to sync for taint I0730 09:55:40.204190 1 controllermanager.go:554] Started "root-ca-cert-publisher" I0730 09:55:40.204295 1 publisher.go:98] Starting root CA certificate configmap publisher I0730 09:55:40.204302 1 shared_informer.go:240] Waiting for caches to sync for crt configmap I0730 09:55:40.213208 1 controllermanager.go:554] Started "endpointslice" I0730 09:55:40.213325 1 endpointslice_controller.go:237] Starting endpoint slice controller I0730 09:55:40.213331 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice I0730 09:55:40.221793 1 controllermanager.go:554] Started "replicationcontroller" I0730 09:55:40.221900 1 replica_set.go:182] Starting replicationcontroller controller I0730 09:55:40.221906 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController I0730 09:55:40.233083 1 controllermanager.go:554] Started "disruption" I0730 09:55:40.233218 1 disruption.go:331] Starting disruption controller I0730 09:55:40.233225 1 shared_informer.go:240] Waiting for caches to sync for disruption I0730 09:55:40.243452 1 controllermanager.go:554] Started "cronjob" I0730 09:55:40.243529 1 cronjob_controller.go:96] Starting CronJob Manager I0730 09:55:40.454779 1 controllermanager.go:554] Started "csrcleaner" I0730 09:55:40.454818 1 cleaner.go:82] Starting CSR cleaner controller I0730 09:55:40.605277 1 controllermanager.go:554] Started "persistentvolume-binder" I0730 09:55:40.605328 1 pv_controller_base.go:307] Starting persistent volume controller I0730 09:55:40.605334 1 shared_informer.go:240] Waiting for caches to sync for persistent volume I0730 09:55:40.754979 1 controllermanager.go:554] Started "pv-protection" I0730 09:55:40.755043 1 pv_protection_controller.go:83] Starting PV protection controller I0730 09:55:40.755055 1 shared_informer.go:240] Waiting for caches to sync for PV protection I0730 09:55:40.904412 1 controllermanager.go:554] Started "endpoint" I0730 09:55:40.904457 1 endpoints_controller.go:184] Starting endpoint controller I0730 09:55:40.904491 1 shared_informer.go:240] Waiting for caches to sync for endpoint I0730 09:55:41.056168 1 controllermanager.go:554] Started "daemonset" I0730 09:55:41.056220 1 daemon_controller.go:285] Starting daemon sets controller I0730 09:55:41.056226 1 shared_informer.go:240] Waiting for caches to sync for daemon sets I0730 09:55:41.103979 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving" I0730 09:55:41.103997 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving I0730 09:55:41.104013 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104264 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client" I0730 09:55:41.104271 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client I0730 09:55:41.104299 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104766 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client" I0730 09:55:41.104772 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client I0730 
09:55:41.104780 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104874 1 controllermanager.go:554] Started "csrsigning" I0730 09:55:41.104912 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown" I0730 09:55:41.104917 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown I0730 09:55:41.104937 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.255923 1 controllermanager.go:554] Started "tokencleaner" I0730 09:55:41.255988 1 tokencleaner.go:118] Starting token cleaner controller I0730 09:55:41.255993 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner I0730 09:55:41.255997 1 shared_informer.go:247] Caches are synced for token_cleaner I0730 09:55:41.405554 1 controllermanager.go:554] Started "attachdetach" I0730 09:55:41.405622 1 attach_detach_controller.go:329] Starting attach detach controller I0730 09:55:41.405628 1 shared_informer.go:240] Waiting for caches to sync for attach detach I0730 09:55:41.554527 1 controllermanager.go:554] Started "persistentvolume-expander" I0730 09:55:41.554606 1 expand_controller.go:310] Starting expand controller I0730 09:55:41.554612 1 shared_informer.go:240] Waiting for caches to sync for expand I0730 09:55:41.705883 1 controllermanager.go:554] Started "job" I0730 09:55:41.705932 1 job_controller.go:148] Starting job controller I0730 09:55:41.705938 1 shared_informer.go:240] Waiting for caches to sync for job I0730 09:55:41.754446 1 controllermanager.go:554] Started "csrapproving" I0730 09:55:41.754722 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0730 09:55:41.755111 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I0730 09:55:41.755119 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I0730 09:55:41.774691 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0730 09:55:41.784409 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0730 09:55:41.793636 1 shared_informer.go:247] Caches are synced for TTL I0730 09:55:41.804153 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for crt configmap I0730 09:55:41.804860 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0730 09:55:41.805048 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0730 09:55:41.811971 1 shared_informer.go:247] Caches are synced for namespace I0730 09:55:41.847535 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0730 09:55:41.855186 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0730 09:55:41.865229 1 shared_informer.go:247] Caches are synced for node I0730 09:55:41.865293 1 range_allocator.go:172] Starting range CIDR allocator I0730 09:55:41.865298 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0730 09:55:41.865301 1 shared_informer.go:247] Caches are synced for cidrallocator I0730 09:55:41.870566 1 shared_informer.go:247] Caches are synced for service account E0730 09:55:41.877240 1 clusterroleaggregation_controller.go:181] admin failed with : Operation 
cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0730 09:55:41.954718 1 shared_informer.go:247] Caches are synced for expand I0730 09:55:41.955116 1 shared_informer.go:247] Caches are synced for PV protection I0730 09:55:42.013590 1 shared_informer.go:247] Caches are synced for endpoint_slice I0730 09:55:42.022213 1 shared_informer.go:247] Caches are synced for ReplicationController I0730 09:55:42.033328 1 shared_informer.go:247] Caches are synced for disruption I0730 09:55:42.033350 1 disruption.go:339] Sending events to api server. I0730 09:55:42.041423 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.045150 1 shared_informer.go:247] Caches are synced for ReplicaSet I0730 09:55:42.054792 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.056647 1 shared_informer.go:247] Caches are synced for daemon sets I0730 09:55:42.059966 1 shared_informer.go:247] Caches are synced for stateful set I0730 09:55:42.088174 1 shared_informer.go:247] Caches are synced for HPA I0730 09:55:42.090312 1 shared_informer.go:247] Caches are synced for GC I0730 09:55:42.094795 1 shared_informer.go:247] Caches are synced for PVC protection I0730 09:55:42.095608 1 shared_informer.go:247] Caches are synced for taint I0730 09:55:42.095970 1 taint_manager.go:187] Starting NoExecuteTaintManager I0730 09:55:42.096344 1 shared_informer.go:247] Caches are synced for deployment I0730 09:55:42.104584 1 shared_informer.go:247] Caches are synced for endpoint I0730 09:55:42.105378 1 shared_informer.go:247] Caches are synced for persistent volume I0730 09:55:42.105717 1 shared_informer.go:247] Caches are synced for attach detach I0730 09:55:42.105975 1 shared_informer.go:247] Caches are synced for job I0730 09:55:42.215619 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0730 09:55:42.516093 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554168 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554210 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage [root@node1 ~]# docker logs 3112cf504b56 I0730 09:55:21.400259 1 serving.go:331] Generated self-signed cert in-memory W0730 09:55:25.261574 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0730 09:55:25.261664 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0730 09:55:25.261682 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0730 09:55:25.261687 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0730 09:55:25.292394 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0730 09:55:25.292498 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.292528 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.293029 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0730 09:55:25.295710 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0730 09:55:25.297091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0730 09:55:25.297190 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0730 09:55:25.297221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0730 09:55:25.297272 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0730 09:55:25.297376 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0730 09:55:25.297420 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0730 09:55:25.297718 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0730 09:55:25.297832 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:25.297872 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group 
"storage.k8s.io" at the cluster scope E0730 09:55:25.297950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0730 09:55:25.299623 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0730 09:55:26.134757 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0730 09:55:26.376729 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:26.380160 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope I0730 09:55:26.892586 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:28.692876 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler... I0730 09:55:28.699723 1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler 这是我的日志帮我看看有什么问题