setObjectForKey: object cannot be nil (key: UIImagePickerControllerOriginalImage)

Fixing an iOS photo-capture crash
This post looks into a crash that occurs when taking photos with UIImagePickerController in an iOS app, in particular when the flash is enabled and the app is sent to the background mid-capture. After analyzing the Xcode logs and trying several fixes, disabling the flash turned out to be an effective way to reduce how often the problem occurs.

The app uses Umeng for crash reporting, and this crash recently showed up in the reports, but I could not reproduce it myself. Searching turned up the following:
http://stackoverflow.com/questions/29836488/uiimagepickercontrolleroriginalimage-nil-causing-crash-on-photo-capture
http://openradar.appspot.com/19953748

From these one can infer that the crash occurs when the screen is locked, or the user returns to the home screen, while a photo is being taken.
I still could not reproduce it that way, though.
Finally, the following radar pointed out the key factor: the flash:
https://openradar.appspot.com/28108858

So I turned the flash on before shooting, took a photo, and locked the screen at the same time.
At that point the following error shows up in the Xcode console:

* Camera: Error capturing still image (Error Domain=AVFoundationErrorDomain Code=-11800 “这项操作无法完成” UserInfo=0x170675d40 {NSUnderlyingError=0x17064a950 “未能完成操作。(“OSStatus”错误 -16800。)”, NSLocalizedFailureReason=发生未知错误(-16800), NSLocalizedDescription=这项操作无法完成})
2016-11-11 11:20:46.524 GDTaxService[1971:938033] * Camera: captureStillImageSurfaceAsync error, not continuing (Error Domain=AVFoundationErrorDomain Code=-11800 “这项操作无法完成” UserInfo=0x170675d40 {NSUnderlyingError=0x17064a950 “未能完成操作。(“OSStatus”错误 -16800。)”, NSLocalizedFailureReason=发生未知错误(-16800), NSLocalizedDescription=这项操作无法完成})

Apparently the capture never completed.
However, after unlocking the screen the camera had already entered the preview screen, except that the preview was completely black.
Tapping the "Use Photo" button then reproduced the crash:

* setObjectForKey: object cannot be nil (key: UIImagePickerControllerOriginalImage)
(null)
((
0 CoreFoundation 0x000000018304c2f4 + 160
1 libobjc.A.dylib 0x00000001948780e4 objc_exception_throw + 60
2 CoreFoundation 0x0000000182f35428 + 972
3 PhotoLibrary 0x000000018e8604e8 + 772
4 PhotoLibrary 0x000000018e860188 PLNotifyImagePickerOfImageAvailability + 28
5 PhotoLibrary 0x000000018e8ad728 + 308
6 PhotoLibrary 0x000000018e8ad540 + 596
7 UIKit 0x0000000187ac1404 + 96
8 GDTaxService 0x00000001000790c4 -[UIControl(SVPreventRepeat) _sv_sendAction:to:forEvent:] + 568
9 UIKit 0x0000000187aaa4e0 + 612
10 UIKit 0x0000000187ac0da0 + 592
11 UIKit 0x0000000187ac0a2c + 700
12 UIKit 0x0000000187ab9f68 + 684
13 UIKit 0x0000000187a8d18c + 264
14 UIKit 0x0000000187d2e324 + 15424
15 UIKit 0x0000000187a8b6a0 + 1716
16 CoreFoundation 0x0000000183004240 + 24
17 CoreFoundation 0x00000001830034e4 + 264
18 CoreFoundation 0x0000000183001594 + 712
19 CoreFoundation 0x0000000182f2d2d4 CFRunLoopRunSpecific + 396
20 GraphicsServices 0x000000018c7436fc GSEventRunModal + 168
21 UIKit 0x0000000187af2fac UIApplicationMain + 1488
22 GDTaxService 0x000000010010f758 main + 124
23 libdyld.dylib 0x0000000194ef6a08 + 4
)

Judging from the reproduction steps, the cause should be: locking the screen while the capture is in progress sends the app to the background, the capture never completes, and that produces the first error above. The camera code only catches that error without handling it, so after the error the picker still advances to the preview screen, but the photo object is nil. Tapping "Use Photo" at that point triggers the crash above.
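To make the failure mode concrete: the exception text matches what NSMutableDictionary throws when a nil object is stored for a key, which suggests the system is inserting the (missing) image into the media-info dictionary. A minimal sketch of that mechanism, for illustration only (not the actual framework code):

// Illustration: storing nil into an NSMutableDictionary throws exactly this exception.
UIImage *capturedImage = nil; // capture was interrupted, so no image exists
NSMutableDictionary *info = [NSMutableDictionary dictionary];
[info setObject:capturedImage forKey:UIImagePickerControllerOriginalImage];
// -> NSInvalidArgumentException: setObjectForKey: object cannot be nil
//    (key: UIImagePickerControllerOriginalImage)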

Attempted fixes:
1. Following one of the Stack Overflow answers, check the application state before taking the picture:

// Only fire the capture while the app is still active.
if ([UIApplication sharedApplication].applicationState == UIApplicationStateActive) {
    [self takePicture];
}

This had no effect: with the flash on the capture takes noticeably longer, so as long as the shutter button is pressed first the check above passes anyway, and locking the screen while the flash is firing still reproduces the crash every time.
2. Use a custom camera UI: draw the interface ourselves and implement the "Use Photo" behavior in our own code.
This did not work either.
With a custom camera UI and the same steps, the system's "Use Photo" handling has already been triggered by default before the photo is returned to the custom UI, so the crash cannot be escaped this way.
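For reference, the custom-UI approach here means hiding the stock controls and driving capture from an overlay view. A minimal sketch, assuming a hypothetical customOverlay view holding our own shutter button:

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;               // hide the system shutter / "Use Photo" UI
picker.cameraOverlayView = self.customOverlay; // hypothetical overlay with our own shutter button
// The overlay's shutter button calls [picker takePicture] and the result arrives via
// -imagePickerController:didFinishPickingMediaWithInfo:. As described above, when the
// capture is interrupted the system's own handling still fires first, so this does not
// avoid the crash.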
3. Disable the flash to reduce the chance of hitting the problem.
With the flash disabled the capture is very quick, and the problem is practically impossible to reproduce.
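Disabling the flash on UIImagePickerController is a one-line change; a small sketch, assuming the picker is configured for the rear camera:

picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.cameraDevice = UIImagePickerControllerCameraDeviceRear;
// Keep the capture window short so a lock/background switch is unlikely mid-capture.
picker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOff;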

W0730 09:55:40.195152 1 controllermanager.go:546] Skipping "route" I0730 09:55:40.195358 1 node_lifecycle_controller.go:542] Starting node controller I0730 09:55:40.195365 1 shared_informer.go:240] Waiting for caches to sync for taint I0730 09:55:40.204190 1 controllermanager.go:554] Started "root-ca-cert-publisher" I0730 09:55:40.204295 1 publisher.go:98] Starting root CA certificate configmap publisher I0730 09:55:40.204302 1 shared_informer.go:240] Waiting for caches to sync for crt configmap I0730 09:55:40.213208 1 controllermanager.go:554] Started "endpointslice" I0730 09:55:40.213325 1 endpointslice_controller.go:237] Starting endpoint slice controller I0730 09:55:40.213331 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice I0730 09:55:40.221793 1 controllermanager.go:554] Started "replicationcontroller" I0730 09:55:40.221900 1 replica_set.go:182] Starting replicationcontroller controller I0730 09:55:40.221906 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController I0730 09:55:40.233083 1 controllermanager.go:554] Started "disruption" I0730 09:55:40.233218 1 disruption.go:331] Starting disruption controller I0730 09:55:40.233225 1 shared_informer.go:240] Waiting for caches to sync for disruption I0730 09:55:40.243452 1 controllermanager.go:554] Started "cronjob" I0730 09:55:40.243529 1 cronjob_controller.go:96] Starting CronJob Manager I0730 09:55:40.454779 1 controllermanager.go:554] Started "csrcleaner" I0730 09:55:40.454818 1 cleaner.go:82] Starting CSR cleaner controller I0730 09:55:40.605277 1 controllermanager.go:554] Started "persistentvolume-binder" I0730 09:55:40.605328 1 pv_controller_base.go:307] Starting persistent volume controller I0730 09:55:40.605334 1 shared_informer.go:240] Waiting for caches to sync for persistent volume I0730 09:55:40.754979 1 controllermanager.go:554] Started "pv-protection" I0730 09:55:40.755043 1 pv_protection_controller.go:83] Starting PV protection controller I0730 09:55:40.755055 1 shared_informer.go:240] Waiting for caches to sync for PV protection I0730 09:55:40.904412 1 controllermanager.go:554] Started "endpoint" I0730 09:55:40.904457 1 endpoints_controller.go:184] Starting endpoint controller I0730 09:55:40.904491 1 shared_informer.go:240] Waiting for caches to sync for endpoint I0730 09:55:41.056168 1 controllermanager.go:554] Started "daemonset" I0730 09:55:41.056220 1 daemon_controller.go:285] Starting daemon sets controller I0730 09:55:41.056226 1 shared_informer.go:240] Waiting for caches to sync for daemon sets I0730 09:55:41.103979 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving" I0730 09:55:41.103997 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving I0730 09:55:41.104013 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104264 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client" I0730 09:55:41.104271 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client I0730 09:55:41.104299 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104766 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client" I0730 09:55:41.104772 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client I0730 
09:55:41.104780 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104874 1 controllermanager.go:554] Started "csrsigning" I0730 09:55:41.104912 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown" I0730 09:55:41.104917 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown I0730 09:55:41.104937 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.255923 1 controllermanager.go:554] Started "tokencleaner" I0730 09:55:41.255988 1 tokencleaner.go:118] Starting token cleaner controller I0730 09:55:41.255993 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner I0730 09:55:41.255997 1 shared_informer.go:247] Caches are synced for token_cleaner I0730 09:55:41.405554 1 controllermanager.go:554] Started "attachdetach" I0730 09:55:41.405622 1 attach_detach_controller.go:329] Starting attach detach controller I0730 09:55:41.405628 1 shared_informer.go:240] Waiting for caches to sync for attach detach I0730 09:55:41.554527 1 controllermanager.go:554] Started "persistentvolume-expander" I0730 09:55:41.554606 1 expand_controller.go:310] Starting expand controller I0730 09:55:41.554612 1 shared_informer.go:240] Waiting for caches to sync for expand I0730 09:55:41.705883 1 controllermanager.go:554] Started "job" I0730 09:55:41.705932 1 job_controller.go:148] Starting job controller I0730 09:55:41.705938 1 shared_informer.go:240] Waiting for caches to sync for job I0730 09:55:41.754446 1 controllermanager.go:554] Started "csrapproving" I0730 09:55:41.754722 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0730 09:55:41.755111 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I0730 09:55:41.755119 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I0730 09:55:41.774691 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0730 09:55:41.784409 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0730 09:55:41.793636 1 shared_informer.go:247] Caches are synced for TTL I0730 09:55:41.804153 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for crt configmap I0730 09:55:41.804860 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0730 09:55:41.805048 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0730 09:55:41.811971 1 shared_informer.go:247] Caches are synced for namespace I0730 09:55:41.847535 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0730 09:55:41.855186 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0730 09:55:41.865229 1 shared_informer.go:247] Caches are synced for node I0730 09:55:41.865293 1 range_allocator.go:172] Starting range CIDR allocator I0730 09:55:41.865298 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0730 09:55:41.865301 1 shared_informer.go:247] Caches are synced for cidrallocator I0730 09:55:41.870566 1 shared_informer.go:247] Caches are synced for service account E0730 09:55:41.877240 1 clusterroleaggregation_controller.go:181] admin failed with : Operation 
cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0730 09:55:41.954718 1 shared_informer.go:247] Caches are synced for expand I0730 09:55:41.955116 1 shared_informer.go:247] Caches are synced for PV protection I0730 09:55:42.013590 1 shared_informer.go:247] Caches are synced for endpoint_slice I0730 09:55:42.022213 1 shared_informer.go:247] Caches are synced for ReplicationController I0730 09:55:42.033328 1 shared_informer.go:247] Caches are synced for disruption I0730 09:55:42.033350 1 disruption.go:339] Sending events to api server. I0730 09:55:42.041423 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.045150 1 shared_informer.go:247] Caches are synced for ReplicaSet I0730 09:55:42.054792 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.056647 1 shared_informer.go:247] Caches are synced for daemon sets I0730 09:55:42.059966 1 shared_informer.go:247] Caches are synced for stateful set I0730 09:55:42.088174 1 shared_informer.go:247] Caches are synced for HPA I0730 09:55:42.090312 1 shared_informer.go:247] Caches are synced for GC I0730 09:55:42.094795 1 shared_informer.go:247] Caches are synced for PVC protection I0730 09:55:42.095608 1 shared_informer.go:247] Caches are synced for taint I0730 09:55:42.095970 1 taint_manager.go:187] Starting NoExecuteTaintManager I0730 09:55:42.096344 1 shared_informer.go:247] Caches are synced for deployment I0730 09:55:42.104584 1 shared_informer.go:247] Caches are synced for endpoint I0730 09:55:42.105378 1 shared_informer.go:247] Caches are synced for persistent volume I0730 09:55:42.105717 1 shared_informer.go:247] Caches are synced for attach detach I0730 09:55:42.105975 1 shared_informer.go:247] Caches are synced for job I0730 09:55:42.215619 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0730 09:55:42.516093 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554168 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554210 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage [root@node1 ~]# docker logs 3112cf504b56 I0730 09:55:21.400259 1 serving.go:331] Generated self-signed cert in-memory W0730 09:55:25.261574 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0730 09:55:25.261664 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0730 09:55:25.261682 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0730 09:55:25.261687 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0730 09:55:25.292394 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0730 09:55:25.292498 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.292528 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.293029 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0730 09:55:25.295710 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0730 09:55:25.297091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0730 09:55:25.297190 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0730 09:55:25.297221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0730 09:55:25.297272 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0730 09:55:25.297376 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0730 09:55:25.297420 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0730 09:55:25.297718 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0730 09:55:25.297832 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:25.297872 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group 
"storage.k8s.io" at the cluster scope E0730 09:55:25.297950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0730 09:55:25.299623 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0730 09:55:26.134757 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0730 09:55:26.376729 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:26.380160 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope I0730 09:55:26.892586 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:28.692876 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler... I0730 09:55:28.699723 1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler 这是我的日志帮我看看有什么问题