Sep 4 17:35:22.917012 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:35:22.917034 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024 Sep 4 17:35:22.917045 kernel: KASLR enabled Sep 4 17:35:22.917051 kernel: efi: EFI v2.7 by EDK II Sep 4 17:35:22.917077 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:35:22.917084 kernel: random: crng init done Sep 4 17:35:22.917091 kernel: ACPI: Early table checksum verification disabled Sep 4 17:35:22.917097 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:35:22.917104 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:35:22.917112 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917118 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917124 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917130 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917137 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917144 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917152 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917159 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917166 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:35:22.917172 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:35:22.917178 kernel: NUMA: Failed to initialise from firmware Sep 4 17:35:22.917185 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:35:22.917191 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 4 17:35:22.917198 kernel: Zone ranges: Sep 4 17:35:22.917204 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:35:22.917211 kernel: DMA32 empty Sep 4 17:35:22.917218 kernel: Normal empty Sep 4 17:35:22.917225 kernel: Movable zone start for each node Sep 4 17:35:22.917231 kernel: Early memory node ranges Sep 4 17:35:22.917237 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:35:22.917244 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:35:22.917250 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:35:22.917256 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:35:22.917263 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:35:22.917269 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:35:22.917276 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:35:22.917282 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:35:22.917288 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:35:22.917296 kernel: psci: probing for conduit method from ACPI. Sep 4 17:35:22.917303 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:35:22.917309 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:35:22.917318 kernel: psci: Trusted OS migration not required Sep 4 17:35:22.917325 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:35:22.917332 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:35:22.917341 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:35:22.917347 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:35:22.917354 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:35:22.917361 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:35:22.917368 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:35:22.917375 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:35:22.917382 kernel: CPU features: detected: Spectre-v4 Sep 4 17:35:22.917388 kernel: CPU features: detected: Spectre-BHB Sep 4 17:35:22.917395 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:35:22.917402 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:35:22.917411 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:35:22.917417 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:35:22.917424 kernel: alternatives: applying boot alternatives Sep 4 17:35:22.917432 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:35:22.917439 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:35:22.917446 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:35:22.917453 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:35:22.917460 kernel: Fallback order for Node 0: 0 Sep 4 17:35:22.917467 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:35:22.917473 kernel: Policy zone: DMA Sep 4 17:35:22.917480 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:35:22.917488 kernel: software IO TLB: area num 4. Sep 4 17:35:22.917495 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:35:22.917502 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved) Sep 4 17:35:22.917509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:35:22.917516 kernel: trace event string verifier disabled Sep 4 17:35:22.917523 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:35:22.917530 kernel: rcu: RCU event tracing is enabled. Sep 4 17:35:22.917537 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:35:22.917544 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:35:22.917551 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:35:22.917558 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:35:22.917565 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:35:22.917573 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:35:22.917580 kernel: GICv3: 256 SPIs implemented Sep 4 17:35:22.917586 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:35:22.917593 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:35:22.917600 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:35:22.917607 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:35:22.917613 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:35:22.917620 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:35:22.917627 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:35:22.917634 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:35:22.917641 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:35:22.917649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:35:22.917656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:35:22.917663 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:35:22.917670 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:35:22.917677 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:35:22.917684 kernel: arm-pv: using stolen time PV Sep 4 17:35:22.917691 kernel: Console: colour dummy device 80x25 Sep 4 17:35:22.917705 kernel: ACPI: Core revision 20230628 Sep 4 17:35:22.917714 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 17:35:22.917722 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:35:22.917731 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:35:22.917738 kernel: SELinux: Initializing. Sep 4 17:35:22.917745 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:35:22.917752 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:35:22.917759 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:35:22.917766 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:35:22.917773 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:35:22.917780 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:35:22.917787 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:35:22.917804 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:35:22.917812 kernel: Remapping and enabling EFI services. Sep 4 17:35:22.917819 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:35:22.917826 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:35:22.917833 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:35:22.917840 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:35:22.917847 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:35:22.917854 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:35:22.917861 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:35:22.917868 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:35:22.917877 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:35:22.917884 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:35:22.917896 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:35:22.917906 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:35:22.917913 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:35:22.917921 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:35:22.917928 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:35:22.917935 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:35:22.917943 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:35:22.917952 kernel: SMP: Total of 4 processors activated. Sep 4 17:35:22.917959 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:35:22.917966 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:35:22.917974 kernel: CPU features: detected: Common not Private translations Sep 4 17:35:22.917981 kernel: CPU features: detected: CRC32 instructions Sep 4 17:35:22.917989 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:35:22.917996 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:35:22.918004 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:35:22.918012 kernel: CPU features: detected: Privileged Access Never Sep 4 17:35:22.918020 kernel: CPU features: detected: RAS Extension Support Sep 4 17:35:22.918027 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:35:22.918035 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:35:22.918043 kernel: alternatives: applying system-wide alternatives Sep 4 17:35:22.918050 kernel: devtmpfs: initialized Sep 4 17:35:22.918058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:35:22.918065 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:35:22.918073 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:35:22.918082 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:35:22.918089 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:35:22.918097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:35:22.918105 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:35:22.918112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:35:22.918120 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:35:22.918127 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:35:22.918135 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Sep 4 17:35:22.918142 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:35:22.918151 kernel: cpuidle: using governor menu Sep 4 17:35:22.918159 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:35:22.918166 kernel: ASID allocator initialised with 32768 entries Sep 4 17:35:22.918174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:35:22.918181 kernel: Serial: AMBA PL011 UART driver Sep 4 17:35:22.918188 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:35:22.918196 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:35:22.918203 kernel: Modules: 509120 pages in range for PLT usage Sep 4 17:35:22.918210 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:35:22.918219 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:35:22.918227 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:35:22.918234 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:35:22.918242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:35:22.918249 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:35:22.918256 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:35:22.918264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:35:22.918271 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:35:22.918278 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:35:22.918287 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:35:22.918295 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:35:22.918302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:35:22.918310 kernel: ACPI: Interpreter enabled Sep 4 17:35:22.918317 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:35:22.918324 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:35:22.918332 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:35:22.918339 kernel: printk: console [ttyAMA0] enabled Sep 4 17:35:22.918346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:35:22.918496 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:35:22.918572 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:35:22.918638 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:35:22.918713 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:35:22.918780 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:35:22.918812 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:35:22.918822 kernel: PCI host bridge to bus 
0000:00 Sep 4 17:35:22.918905 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:35:22.918967 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:35:22.919027 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:35:22.919086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:35:22.919169 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 4 17:35:22.919249 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:35:22.919321 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:35:22.919388 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:35:22.919455 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:35:22.919521 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:35:22.919587 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:35:22.919653 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:35:22.919723 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:35:22.919786 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:35:22.919871 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:35:22.919881 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:35:22.919889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:35:22.919896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:35:22.919904 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:35:22.919911 kernel: iommu: Default domain type: Translated Sep 4 17:35:22.919919 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:35:22.919926 kernel: efivars: Registered efivars operations Sep 4 17:35:22.919936 kernel: vgaarb: loaded Sep 4 17:35:22.919944 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:35:22.919951 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:35:22.919959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:35:22.919966 kernel: pnp: PnP ACPI init Sep 4 17:35:22.920042 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:35:22.920053 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:35:22.920061 kernel: NET: Registered PF_INET protocol family Sep 4 17:35:22.920070 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:35:22.920078 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:35:22.920086 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:35:22.920093 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:35:22.920101 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:35:22.920108 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:35:22.920116 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:35:22.920124 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:35:22.920131 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:35:22.920140 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:35:22.920148 kernel: kvm [1]: HYP mode not available Sep 4 17:35:22.920155 kernel: Initialise system trusted keyrings Sep 4 
17:35:22.920162 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:35:22.920170 kernel: Key type asymmetric registered Sep 4 17:35:22.920177 kernel: Asymmetric key parser 'x509' registered Sep 4 17:35:22.920185 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:35:22.920192 kernel: io scheduler mq-deadline registered Sep 4 17:35:22.920200 kernel: io scheduler kyber registered Sep 4 17:35:22.920208 kernel: io scheduler bfq registered Sep 4 17:35:22.920216 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:35:22.920223 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:35:22.920232 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:35:22.920300 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:35:22.920310 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:35:22.920317 kernel: thunder_xcv, ver 1.0 Sep 4 17:35:22.920325 kernel: thunder_bgx, ver 1.0 Sep 4 17:35:22.920332 kernel: nicpf, ver 1.0 Sep 4 17:35:22.920341 kernel: nicvf, ver 1.0 Sep 4 17:35:22.920413 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:35:22.920478 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:35:22 UTC (1725471322) Sep 4 17:35:22.920488 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:35:22.920496 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:35:22.920504 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:35:22.920511 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:35:22.920519 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:35:22.920529 kernel: Segment Routing with IPv6 Sep 4 17:35:22.920536 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:35:22.920544 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:35:22.920551 kernel: Key type dns_resolver registered Sep 4 17:35:22.920558 kernel: registered taskstats version 1 Sep 4 17:35:22.920566 kernel: Loading compiled-in X.509 certificates Sep 4 17:35:22.920573 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7' Sep 4 17:35:22.920580 kernel: Key type .fscrypt registered Sep 4 17:35:22.920588 kernel: Key type fscrypt-provisioning registered Sep 4 17:35:22.920597 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:35:22.920604 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:35:22.920612 kernel: ima: No architecture policies found Sep 4 17:35:22.920619 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:35:22.920627 kernel: clk: Disabling unused clocks Sep 4 17:35:22.920634 kernel: Freeing unused kernel memory: 39040K Sep 4 17:35:22.920641 kernel: Run /init as init process Sep 4 17:35:22.920648 kernel: with arguments: Sep 4 17:35:22.920656 kernel: /init Sep 4 17:35:22.920664 kernel: with environment: Sep 4 17:35:22.920672 kernel: HOME=/ Sep 4 17:35:22.920679 kernel: TERM=linux Sep 4 17:35:22.920686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:35:22.920696 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:35:22.920715 systemd[1]: Detected virtualization kvm. 
Sep 4 17:35:22.920723 systemd[1]: Detected architecture arm64. Sep 4 17:35:22.920733 systemd[1]: Running in initrd. Sep 4 17:35:22.920741 systemd[1]: No hostname configured, using default hostname. Sep 4 17:35:22.920749 systemd[1]: Hostname set to <localhost>. Sep 4 17:35:22.920758 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:35:22.920766 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:35:22.920774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:35:22.920782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:35:22.920800 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:35:22.920812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:35:22.920821 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:35:22.920829 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:35:22.920839 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:35:22.920847 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:35:22.920856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:35:22.920864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:35:22.920873 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:35:22.920882 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:35:22.920890 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:35:22.920898 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:35:22.920906 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:35:22.920914 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:35:22.920923 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:35:22.920931 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:35:22.920939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:35:22.920949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:35:22.920958 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:35:22.920966 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:35:22.920975 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:35:22.920983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:35:22.920991 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:35:22.921003 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:35:22.921011 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:35:22.921021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:35:22.921030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:35:22.921038 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:35:22.921046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 4 17:35:22.921054 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:35:22.921063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:35:22.921073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:35:22.921101 systemd-journald[238]: Collecting audit messages is disabled. Sep 4 17:35:22.921125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:35:22.921135 systemd-journald[238]: Journal started Sep 4 17:35:22.921157 systemd-journald[238]: Runtime Journal (/run/log/journal/9a1aa348672e44a997eacef87d5050e8) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:35:22.911141 systemd-modules-load[239]: Inserted module 'overlay' Sep 4 17:35:22.923821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:35:22.925534 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 4 17:35:22.926493 kernel: Bridge firewalling registered Sep 4 17:35:22.928805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:35:22.931812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:35:22.933905 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:35:22.933981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:35:22.938920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:35:22.941911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:35:22.944467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:35:22.949167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:35:22.951955 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:35:22.955123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:35:22.958116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:35:22.964571 dracut-cmdline[273]: dracut-dracut-053 Sep 4 17:35:22.970762 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:35:22.968972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:35:23.003550 systemd-resolved[281]: Positive Trust Anchors: Sep 4 17:35:23.003572 systemd-resolved[281]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:35:23.003602 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:35:23.008495 systemd-resolved[281]: Defaulting to hostname 'linux'. Sep 4 17:35:23.009648 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:35:23.018992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:35:23.053826 kernel: SCSI subsystem initialized Sep 4 17:35:23.058813 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:35:23.066835 kernel: iscsi: registered transport (tcp) Sep 4 17:35:23.079812 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:35:23.079830 kernel: QLogic iSCSI HBA Driver Sep 4 17:35:23.123339 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:35:23.138029 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:35:23.155820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:35:23.155876 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:35:23.155888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:35:23.202820 kernel: raid6: neonx8 gen() 15757 MB/s Sep 4 17:35:23.219807 kernel: raid6: neonx4 gen() 15666 MB/s Sep 4 17:35:23.236805 kernel: raid6: neonx2 gen() 13239 MB/s Sep 4 17:35:23.253814 kernel: raid6: neonx1 gen() 10523 MB/s Sep 4 17:35:23.270803 kernel: raid6: int64x8 gen() 6953 MB/s Sep 4 17:35:23.287813 kernel: raid6: int64x4 gen() 7349 MB/s Sep 4 17:35:23.304808 kernel: raid6: int64x2 gen() 6127 MB/s Sep 4 17:35:23.321810 kernel: raid6: int64x1 gen() 5072 MB/s Sep 4 17:35:23.321834 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s Sep 4 17:35:23.338959 kernel: raid6: .... xor() 12024 MB/s, rmw enabled Sep 4 17:35:23.338982 kernel: raid6: using neon recovery algorithm Sep 4 17:35:23.343808 kernel: xor: measuring software checksum speed Sep 4 17:35:23.344804 kernel: 8regs : 19864 MB/sec Sep 4 17:35:23.345803 kernel: 32regs : 19697 MB/sec Sep 4 17:35:23.345816 kernel: arm64_neon : 27215 MB/sec Sep 4 17:35:23.346807 kernel: xor: using function: arm64_neon (27215 MB/sec) Sep 4 17:35:23.401812 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:35:23.414121 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:35:23.427978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:35:23.439882 systemd-udevd[461]: Using default interface naming scheme 'v255'. Sep 4 17:35:23.442995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:35:23.445673 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:35:23.460857 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 4 17:35:23.489888 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:35:23.497962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:35:23.538833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:35:23.544940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:35:23.558345 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:35:23.559754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:35:23.562879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:35:23.565221 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:35:23.579982 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:35:23.588842 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 4 17:35:23.590667 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:35:23.594375 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:35:23.594415 kernel: GPT:9289727 != 19775487 Sep 4 17:35:23.594426 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:35:23.594436 kernel: GPT:9289727 != 19775487 Sep 4 17:35:23.594453 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:35:23.594463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:35:23.594168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:35:23.602116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:35:23.602246 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:35:23.605854 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:35:23.607125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:35:23.607294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:35:23.609732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:35:23.621063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:35:23.626788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507) Sep 4 17:35:23.628811 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (508) Sep 4 17:35:23.637417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:35:23.641870 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 17:35:23.643377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:35:23.651766 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:35:23.655336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:35:23.656368 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:35:23.670947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:35:23.672453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:35:23.677495 disk-uuid[552]: Primary Header is updated. 
Sep 4 17:35:23.677495 disk-uuid[552]: Secondary Entries is updated. Sep 4 17:35:23.677495 disk-uuid[552]: Secondary Header is updated. Sep 4 17:35:23.680814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:35:23.694768 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:35:24.693696 disk-uuid[553]: The operation has completed successfully. Sep 4 17:35:24.694589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:35:24.721859 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:35:24.721958 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:35:24.755038 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:35:24.758082 sh[574]: Success Sep 4 17:35:24.771849 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:35:24.816343 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:35:24.818192 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:35:24.820829 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:35:24.832249 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20 Sep 4 17:35:24.832295 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:35:24.832306 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:35:24.832316 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:35:24.832808 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:35:24.837418 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:35:24.838345 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:35:24.850964 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:35:24.852698 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:35:24.860811 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:35:24.860851 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:35:24.860862 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:35:24.865976 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:35:24.872979 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:35:24.874827 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:35:24.882491 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:35:24.889964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:35:24.952210 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:35:24.967022 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 17:35:24.989336 ignition[670]: Ignition 2.18.0 Sep 4 17:35:24.989345 ignition[670]: Stage: fetch-offline Sep 4 17:35:24.989387 ignition[670]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:24.989397 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:24.989481 ignition[670]: parsed url from cmdline: "" Sep 4 17:35:24.989484 ignition[670]: no config URL provided Sep 4 17:35:24.989489 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:35:24.989497 ignition[670]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:35:24.989529 ignition[670]: op(1): [started] loading QEMU firmware config module Sep 4 17:35:24.989534 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:35:24.998878 systemd-networkd[764]: lo: Link UP Sep 4 17:35:24.998890 systemd-networkd[764]: lo: Gained carrier Sep 4 17:35:24.999550 systemd-networkd[764]: Enumeration completed Sep 4 17:35:25.000078 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:35:25.000082 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:35:25.003517 ignition[670]: op(1): [finished] loading QEMU firmware config module Sep 4 17:35:25.001765 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:35:25.002000 systemd-networkd[764]: eth0: Link UP Sep 4 17:35:25.002004 systemd-networkd[764]: eth0: Gained carrier Sep 4 17:35:25.002011 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:35:25.004736 systemd[1]: Reached target network.target - Network. Sep 4 17:35:25.022843 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:35:25.048525 ignition[670]: parsing config with SHA512: 872b528ba72f9659f7d7734401bf11c3456f385de3218ff44c4bdebfcd875baf7447dac0f0dda06c2d08f815a02aee458d800e1034a6010f412919b60c81a68e Sep 4 17:35:25.054025 unknown[670]: fetched base config from "system" Sep 4 17:35:25.054034 unknown[670]: fetched user config from "qemu" Sep 4 17:35:25.054462 ignition[670]: fetch-offline: fetch-offline passed Sep 4 17:35:25.056296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:35:25.054518 ignition[670]: Ignition finished successfully Sep 4 17:35:25.057769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:35:25.065991 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:35:25.077291 ignition[772]: Ignition 2.18.0 Sep 4 17:35:25.077301 ignition[772]: Stage: kargs Sep 4 17:35:25.077462 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:25.077472 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:25.078375 ignition[772]: kargs: kargs passed Sep 4 17:35:25.078420 ignition[772]: Ignition finished successfully Sep 4 17:35:25.081017 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:35:25.091938 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 17:35:25.104621 ignition[780]: Ignition 2.18.0 Sep 4 17:35:25.104630 ignition[780]: Stage: disks Sep 4 17:35:25.104818 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:25.104831 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:25.105730 ignition[780]: disks: disks passed Sep 4 17:35:25.105777 ignition[780]: Ignition finished successfully Sep 4 17:35:25.108331 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:35:25.109492 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:35:25.111938 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:35:25.113089 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:35:25.114604 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:35:25.116249 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:35:25.128956 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:35:25.143406 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:35:25.148844 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:35:25.150707 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:35:25.201809 kernel: EXT4-fs (vda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none. Sep 4 17:35:25.202412 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:35:25.203764 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:35:25.219918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:35:25.221590 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:35:25.222574 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:35:25.222618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:35:25.222641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:35:25.231082 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Sep 4 17:35:25.229207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:35:25.235018 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:35:25.235040 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:35:25.235050 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:35:25.231035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:35:25.237848 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:35:25.239205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:35:25.283495 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:35:25.287056 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:35:25.290960 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:35:25.294037 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:35:25.368049 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 4 17:35:25.381914 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:35:25.383335 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:35:25.388828 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:35:25.407382 ignition[912]: INFO : Ignition 2.18.0 Sep 4 17:35:25.407382 ignition[912]: INFO : Stage: mount Sep 4 17:35:25.409158 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:25.409158 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:25.409158 ignition[912]: INFO : mount: mount passed Sep 4 17:35:25.409158 ignition[912]: INFO : Ignition finished successfully Sep 4 17:35:25.408838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:35:25.411378 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:35:25.423897 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:35:25.829077 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:35:25.837985 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:35:25.844589 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Sep 4 17:35:25.844625 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:35:25.844637 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:35:25.845313 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:35:25.847810 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:35:25.848914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:35:25.866485 ignition[945]: INFO : Ignition 2.18.0 Sep 4 17:35:25.867508 ignition[945]: INFO : Stage: files Sep 4 17:35:25.867508 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:25.867508 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:25.870088 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:35:25.871160 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:35:25.871160 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:35:25.875248 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:35:25.876445 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:35:25.876445 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:35:25.875920 unknown[945]: wrote ssh authorized keys file for user: core Sep 4 17:35:25.880044 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:35:25.880044 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:35:25.910462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:35:25.961331 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:35:25.961331 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:35:25.965021 
ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 17:35:26.050907 systemd-networkd[764]: eth0: Gained IPv6LL Sep 4 17:35:26.259705 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:35:26.319818 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:35:26.321449 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Sep 4 17:35:26.605578 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:35:26.860664 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:35:26.860664 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:35:26.864051 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:35:26.898125 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:35:26.903812 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:35:26.903812 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:35:26.903812 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:35:26.903812 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:35:26.903812 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:35:26.903812 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:35:26.903812 ignition[945]: INFO : files: files passed Sep 4 17:35:26.903812 ignition[945]: INFO : Ignition finished successfully Sep 4 17:35:26.904583 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:35:26.926098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:35:26.928021 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:35:26.933418 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:35:26.934048 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:35:26.936856 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:35:26.939031 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:35:26.939031 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:35:26.942077 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:35:26.940733 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:35:26.943619 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:35:26.952058 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:35:26.972100 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:35:26.972206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 4 17:35:26.974415 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:35:26.976018 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:35:26.977575 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:35:26.978350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:35:26.993695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:35:26.996140 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:35:27.007754 systemd[1]: Stopped target network.target - Network. Sep 4 17:35:27.009523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:35:27.010773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:35:27.012893 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:35:27.014855 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:35:27.014986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:35:27.017525 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:35:27.019497 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:35:27.021084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:35:27.022918 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:35:27.024916 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:35:27.026673 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:35:27.028615 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:35:27.030515 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:35:27.032567 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:35:27.034374 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:35:27.035969 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:35:27.036105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:35:27.038667 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:35:27.040931 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:35:27.043070 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:35:27.043229 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:35:27.045155 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:35:27.045289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:35:27.047981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:35:27.048141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:35:27.049951 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:35:27.051642 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:35:27.051779 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:35:27.053923 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:35:27.055963 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 4 17:35:27.057447 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:35:27.057541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:35:27.059059 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:35:27.059142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:35:27.061234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:35:27.061348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:35:27.062938 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:35:27.063042 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:35:27.073037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:35:27.074661 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:35:27.075758 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:35:27.077414 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:35:27.078849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:35:27.078975 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:35:27.080953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:35:27.081058 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:35:27.087853 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 4 17:35:27.088189 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:35:27.089708 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:35:27.097188 ignition[1001]: INFO : Ignition 2.18.0 Sep 4 17:35:27.097188 ignition[1001]: INFO : Stage: umount Sep 4 17:35:27.097188 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:35:27.097188 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:35:27.092253 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:35:27.108029 ignition[1001]: INFO : umount: umount passed Sep 4 17:35:27.108029 ignition[1001]: INFO : Ignition finished successfully Sep 4 17:35:27.092349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:35:27.096192 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:35:27.098188 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:35:27.101855 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:35:27.102344 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:35:27.102436 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:35:27.105114 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:35:27.105159 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:35:27.107142 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:35:27.107197 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:35:27.109029 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:35:27.109073 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:35:27.111319 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:35:27.111363 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:35:27.112627 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 4 17:35:27.112675 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:35:27.123915 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:35:27.125198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:35:27.125263 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:35:27.126984 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:35:27.127027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:35:27.128766 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:35:27.128821 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:35:27.130297 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:35:27.130336 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:35:27.132071 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:35:27.142119 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:35:27.142226 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:35:27.155449 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:35:27.155576 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:35:27.157105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:35:27.157148 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:35:27.158855 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:35:27.158884 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:35:27.161397 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:35:27.161442 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:35:27.164588 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:35:27.164637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:35:27.166973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:35:27.167019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:35:27.179015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:35:27.180092 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:35:27.180164 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:35:27.182514 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:35:27.182564 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:35:27.184649 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:35:27.184710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:35:27.187011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:35:27.187054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:35:27.189547 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:35:27.189642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:35:27.191466 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 4 17:35:27.191548 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:35:27.194155 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:35:27.195411 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:35:27.195469 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:35:27.212204 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:35:27.219726 systemd[1]: Switching root. Sep 4 17:35:27.246940 systemd-journald[238]: Journal stopped Sep 4 17:35:28.010904 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 4 17:35:28.010960 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:35:28.010972 kernel: SELinux: policy capability open_perms=1 Sep 4 17:35:28.010989 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:35:28.010998 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:35:28.011008 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:35:28.011018 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:35:28.011027 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:35:28.011037 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:35:28.011046 kernel: audit: type=1403 audit(1725471327.414:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:35:28.011056 systemd[1]: Successfully loaded SELinux policy in 42.487ms. Sep 4 17:35:28.011073 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.563ms. Sep 4 17:35:28.011086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:35:28.011097 systemd[1]: Detected virtualization kvm. Sep 4 17:35:28.011109 systemd[1]: Detected architecture arm64. Sep 4 17:35:28.011119 systemd[1]: Detected first boot. Sep 4 17:35:28.011143 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:35:28.011154 zram_generator::config[1047]: No configuration found. Sep 4 17:35:28.011166 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:35:28.011177 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:35:28.011188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:35:28.011200 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:35:28.011213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:35:28.011224 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:35:28.011234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:35:28.011245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:35:28.011256 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:35:28.011268 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:35:28.011279 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:35:28.011291 systemd[1]: Created slice user.slice - User and Session Slice. 
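[Editor's note] At the switch-root above the initrd journal is stopped and handed over to the main system journal. A minimal sketch, assuming the journal is later flushed to persistent storage, of how the Ignition stage messages recorded before the pivot can be reviewed afterwards ("ignition" is the syslog identifier seen in the entries above):

  import subprocess

  # Show this boot's Ignition messages from the journal, unpaged.
  subprocess.run(["journalctl", "-b", "-t", "ignition", "--no-pager"], check=False)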
Sep 4 17:35:28.011302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:35:28.011312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:35:28.011323 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:35:28.011333 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:35:28.011344 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:35:28.011356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:35:28.011367 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:35:28.011378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:35:28.011391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:35:28.011401 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:35:28.011412 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:35:28.011422 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:35:28.011432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:35:28.011443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:35:28.011454 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:35:28.011465 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:35:28.011477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:35:28.011487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:35:28.011497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:35:28.011508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:35:28.011518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:35:28.011528 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:35:28.011538 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:35:28.011548 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:35:28.011559 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:35:28.011570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:35:28.011581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:35:28.011591 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:35:28.011602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:35:28.011612 systemd[1]: Reached target machines.target - Containers. Sep 4 17:35:28.011622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:35:28.011633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:35:28.011643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 4 17:35:28.011655 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:35:28.011665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:35:28.011676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:35:28.011692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:35:28.011704 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:35:28.011714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:35:28.011728 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:35:28.011738 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:35:28.011749 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:35:28.011760 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:35:28.011770 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:35:28.011780 kernel: loop: module loaded Sep 4 17:35:28.011796 kernel: fuse: init (API version 7.39) Sep 4 17:35:28.011808 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:35:28.011819 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:35:28.011830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:35:28.011840 kernel: ACPI: bus type drm_connector registered Sep 4 17:35:28.011849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:35:28.011864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:35:28.011875 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:35:28.011885 systemd[1]: Stopped verity-setup.service. Sep 4 17:35:28.011895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:35:28.011905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:35:28.011934 systemd-journald[1113]: Collecting audit messages is disabled. Sep 4 17:35:28.011958 systemd-journald[1113]: Journal started Sep 4 17:35:28.011979 systemd-journald[1113]: Runtime Journal (/run/log/journal/9a1aa348672e44a997eacef87d5050e8) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:35:27.811341 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:35:27.828579 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:35:27.828976 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:35:28.013520 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:35:28.015179 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:35:28.015815 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:35:28.016743 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:35:28.018105 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:35:28.019964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:35:28.021328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:35:28.022749 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:35:28.022941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
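[Editor's note] The modprobe@ template units above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel confirms fuse and loop initialising. A small illustrative check (modules built into the kernel rather than loaded will not appear in /proc/modules):

  # Intersect the modules requested above with what the kernel reports loaded.
  wanted = {"fuse", "loop", "dm_mod", "configfs", "efi_pstore"}
  with open("/proc/modules") as f:
      loaded = {line.split()[0] for line in f}
  print(sorted(wanted & loaded))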
Sep 4 17:35:28.024327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:35:28.024479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:35:28.025935 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:35:28.026075 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:35:28.027453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:35:28.027611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:35:28.029250 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:35:28.029392 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:35:28.031005 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:35:28.031165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:35:28.032739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:35:28.036210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:35:28.037521 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:35:28.050183 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:35:28.057905 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:35:28.059893 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:35:28.060805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:35:28.060841 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:35:28.062628 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:35:28.064993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:35:28.068019 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:35:28.068903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:35:28.070627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:35:28.072678 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:35:28.073938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:35:28.076065 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:35:28.076985 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:35:28.081087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:35:28.085981 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:35:28.087348 systemd-journald[1113]: Time spent on flushing to /var/log/journal/9a1aa348672e44a997eacef87d5050e8 is 29.258ms for 859 entries. Sep 4 17:35:28.087348 systemd-journald[1113]: System Journal (/var/log/journal/9a1aa348672e44a997eacef87d5050e8) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:35:28.125415 systemd-journald[1113]: Received client request to flush runtime journal. 
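[Editor's note] The journald lines above show the runtime journal capped at roughly 47 MB and the persistent journal at roughly 195 MB before the flush to /var/log/journal. Such limits are normally tuned via journald configuration; the drop-in below is a hypothetical example (file name and values invented for illustration):

  # Print a hypothetical journald drop-in; SystemMaxUse/RuntimeMaxUse are the
  # standard journald.conf options for these caps.
  dropin = """\
  # /etc/systemd/journald.conf.d/size.conf (hypothetical)
  [Journal]
  SystemMaxUse=200M
  RuntimeMaxUse=48M
  """
  print(dropin)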
Sep 4 17:35:28.125460 kernel: loop0: detected capacity change from 0 to 194512 Sep 4 17:35:28.125480 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:35:28.125640 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:35:28.093242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:35:28.097274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:35:28.099349 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:35:28.102160 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:35:28.104569 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:35:28.108220 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:35:28.113552 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:35:28.125992 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:35:28.128180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:35:28.130350 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:35:28.134832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:35:28.142938 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Sep 4 17:35:28.142955 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Sep 4 17:35:28.143875 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:35:28.148150 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:35:28.158988 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:35:28.161817 kernel: loop1: detected capacity change from 0 to 59688 Sep 4 17:35:28.164611 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:35:28.165352 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:35:28.195820 kernel: loop2: detected capacity change from 0 to 113672 Sep 4 17:35:28.198471 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:35:28.205000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:35:28.221807 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 4 17:35:28.221826 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 4 17:35:28.224839 kernel: loop3: detected capacity change from 0 to 194512 Sep 4 17:35:28.227479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:35:28.243309 kernel: loop4: detected capacity change from 0 to 59688 Sep 4 17:35:28.248949 kernel: loop5: detected capacity change from 0 to 113672 Sep 4 17:35:28.251584 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:35:28.252092 (sd-merge)[1183]: Merged extensions into '/usr'. Sep 4 17:35:28.256408 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:35:28.256424 systemd[1]: Reloading... Sep 4 17:35:28.314829 zram_generator::config[1213]: No configuration found. 
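[Editor's note] The sd-merge lines above show the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' system extension images being overlaid onto /usr, which is why systemd then reloads. A minimal sketch of how the merged extensions can be inspected afterwards:

  import subprocess

  # List the hierarchies and extension images currently merged by systemd-sysext.
  subprocess.run(["systemd-sysext", "status"], check=False)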
Sep 4 17:35:28.354878 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:35:28.412107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:35:28.450744 systemd[1]: Reloading finished in 193 ms. Sep 4 17:35:28.480523 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:35:28.483034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:35:28.495094 systemd[1]: Starting ensure-sysext.service... Sep 4 17:35:28.497505 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:35:28.509662 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:35:28.509831 systemd[1]: Reloading... Sep 4 17:35:28.518674 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:35:28.520642 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:35:28.521322 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:35:28.521538 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 4 17:35:28.521581 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 4 17:35:28.527827 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:35:28.527841 systemd-tmpfiles[1247]: Skipping /boot Sep 4 17:35:28.534698 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:35:28.534714 systemd-tmpfiles[1247]: Skipping /boot Sep 4 17:35:28.560821 zram_generator::config[1275]: No configuration found. Sep 4 17:35:28.645354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:35:28.684021 systemd[1]: Reloading finished in 173 ms. Sep 4 17:35:28.698149 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:35:28.699562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:35:28.719713 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:35:28.722532 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:35:28.726022 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:35:28.728857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:35:28.739543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:35:28.742186 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:35:28.754050 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:35:28.760306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:35:28.761772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
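[Editor's note] The docker.socket warning above is informational: systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock at load time. A hypothetical drop-in that would make the unit use the new path explicitly (the empty ListenStream= first clears the inherited list, per normal systemd semantics):

  # Print a hypothetical docker.socket drop-in; shown as text only.
  dropin = """\
  # /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical)
  [Socket]
  ListenStream=
  ListenStream=/run/docker.sock
  """
  print(dropin)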
Sep 4 17:35:28.764905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:35:28.770122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:35:28.771105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:35:28.771912 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:35:28.773483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:35:28.773638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:35:28.782471 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Sep 4 17:35:28.784491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:35:28.784869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:35:28.786219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:35:28.796307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:35:28.802512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:35:28.804921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:35:28.805649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:35:28.809387 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:35:28.813246 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:35:28.815080 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:35:28.815224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:35:28.816956 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:35:28.817101 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:35:28.818360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:35:28.818502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:35:28.819712 augenrules[1342]: No rules Sep 4 17:35:28.820109 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:35:28.824810 systemd[1]: Finished ensure-sysext.service. Sep 4 17:35:28.826089 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:35:28.841431 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:35:28.842480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:35:28.842560 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:35:28.846012 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:35:28.848226 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:35:28.850890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
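[Editor's note] augenrules reports "No rules" above, so audit-rules.service finishes with an empty rule set. Audit rules are normally supplied as files under /etc/audit/rules.d/ and compiled by augenrules; a minimal, purely illustrative rule file (path and key name invented):

  # Print a hypothetical audit watch rule on sshd_config.
  rules = """\
  # /etc/audit/rules.d/10-example.rules (hypothetical)
  -w /etc/ssh/sshd_config -p wa -k sshd_config
  """
  print(rules)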
Sep 4 17:35:28.865225 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1341) Sep 4 17:35:28.869845 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:35:28.872094 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:35:28.900845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1344) Sep 4 17:35:28.925199 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:35:28.935965 systemd-resolved[1312]: Positive Trust Anchors: Sep 4 17:35:28.936202 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:35:28.936237 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:35:28.936697 systemd-networkd[1373]: lo: Link UP Sep 4 17:35:28.936701 systemd-networkd[1373]: lo: Gained carrier Sep 4 17:35:28.937433 systemd-networkd[1373]: Enumeration completed Sep 4 17:35:28.941851 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:35:28.942997 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:35:28.944001 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:35:28.944244 systemd-resolved[1312]: Defaulting to hostname 'linux'. Sep 4 17:35:28.945368 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:35:28.948006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:35:28.948275 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:35:28.948285 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:35:28.949143 systemd-networkd[1373]: eth0: Link UP Sep 4 17:35:28.949154 systemd-networkd[1373]: eth0: Gained carrier Sep 4 17:35:28.949168 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:35:28.950497 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:35:28.951695 systemd[1]: Reached target network.target - Network. Sep 4 17:35:28.952851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:35:28.960630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:35:28.964899 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:35:28.965871 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Sep 4 17:35:28.967134 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
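[Editor's note] Above, eth0 is matched by Flatcar's catch-all zz-default.network and obtains 10.0.0.135/16 via DHCP, after which timesyncd reaches the gateway-provided NTP server. A minimal systemd-networkd unit with the same effect for this interface would look roughly like the sketch below (illustrative, not the actual shipped file):

  # Print a hypothetical .network unit equivalent in effect to the matched default.
  network_unit = """\
  # /etc/systemd/network/10-eth0.network (hypothetical)
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  """
  print(network_unit)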
Sep 4 17:35:28.967204 systemd-timesyncd[1375]: Initial clock synchronization to Wed 2024-09-04 17:35:28.582881 UTC. Sep 4 17:35:29.004648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:35:29.012223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:35:29.015605 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:35:29.036140 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:35:29.046869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:35:29.078480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:35:29.080361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:35:29.081277 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:35:29.082204 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:35:29.083228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:35:29.084350 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:35:29.085319 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:35:29.086354 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:35:29.087507 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:35:29.087542 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:35:29.088385 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:35:29.090014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:35:29.092608 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:35:29.099862 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:35:29.101950 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:35:29.103399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:35:29.104576 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:35:29.105546 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:35:29.106446 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:35:29.106487 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:35:29.107516 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:35:29.109504 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:35:29.110938 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:35:29.113815 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:35:29.116152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:35:29.117412 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:35:29.118999 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 4 17:35:29.121886 jq[1406]: false Sep 4 17:35:29.123932 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:35:29.126671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:35:29.132112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:35:29.138031 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:35:29.142802 dbus-daemon[1405]: [system] SELinux support is enabled Sep 4 17:35:29.143252 extend-filesystems[1407]: Found loop3 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found loop4 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found loop5 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda1 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda2 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda3 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found usr Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda4 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda6 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda7 Sep 4 17:35:29.143252 extend-filesystems[1407]: Found vda9 Sep 4 17:35:29.143252 extend-filesystems[1407]: Checking size of /dev/vda9 Sep 4 17:35:29.156682 extend-filesystems[1407]: Resized partition /dev/vda9 Sep 4 17:35:29.145820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:35:29.158126 extend-filesystems[1426]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:35:29.159534 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:35:29.146319 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:35:29.147028 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:35:29.149852 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:35:29.153376 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:35:29.157427 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:35:29.161935 jq[1422]: true Sep 4 17:35:29.172380 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:35:29.172552 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:35:29.172846 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:35:29.172977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:35:29.177108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:35:29.177282 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:35:29.184939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1347) Sep 4 17:35:29.185811 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:35:29.202377 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:35:29.202656 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
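[Editor's note] The extend-filesystems step above grows the mounted root ext4 filesystem online from 553472 to 1864699 blocks using resize2fs 1.47.0. The equivalent manual invocation is sketched below; with no size argument resize2fs grows the filesystem to fill the partition, and growing while mounted is supported for ext4:

  import subprocess

  # Online-grow the root filesystem on /dev/vda9 (illustrative; requires root).
  subprocess.run(["resize2fs", "/dev/vda9"], check=False)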
Sep 4 17:35:29.206078 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:35:29.209594 extend-filesystems[1426]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:35:29.209594 extend-filesystems[1426]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:35:29.209594 extend-filesystems[1426]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:35:29.206096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:35:29.218290 update_engine[1420]: I0904 17:35:29.216681 1420 main.cc:92] Flatcar Update Engine starting Sep 4 17:35:29.230147 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Sep 4 17:35:29.231097 tar[1431]: linux-arm64/helm Sep 4 17:35:29.231288 update_engine[1420]: I0904 17:35:29.222275 1420 update_check_scheduler.cc:74] Next update check in 4m49s Sep 4 17:35:29.224222 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:35:29.224841 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:35:29.225097 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:35:29.231723 jq[1432]: true Sep 4 17:35:29.226736 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:35:29.239734 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:35:29.243627 systemd-logind[1413]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:35:29.243983 systemd-logind[1413]: New seat seat0. Sep 4 17:35:29.244714 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:35:29.312174 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:35:29.316458 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:35:29.318856 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:35:29.320726 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:35:29.439170 containerd[1433]: time="2024-09-04T17:35:29.439044785Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:35:29.468158 containerd[1433]: time="2024-09-04T17:35:29.468111442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:35:29.468158 containerd[1433]: time="2024-09-04T17:35:29.468159118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469672 containerd[1433]: time="2024-09-04T17:35:29.469384086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469672 containerd[1433]: time="2024-09-04T17:35:29.469420147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469672 containerd[1433]: time="2024-09-04T17:35:29.469646413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469672 containerd[1433]: time="2024-09-04T17:35:29.469663473Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:35:29.469831 containerd[1433]: time="2024-09-04T17:35:29.469732472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469831 containerd[1433]: time="2024-09-04T17:35:29.469784793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469831 containerd[1433]: time="2024-09-04T17:35:29.469811943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.469886 containerd[1433]: time="2024-09-04T17:35:29.469873137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470081391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470108161Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470118328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470211165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470224417Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470274072Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:35:29.470473 containerd[1433]: time="2024-09-04T17:35:29.470286524Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:35:29.475835 containerd[1433]: time="2024-09-04T17:35:29.475780432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:35:29.475835 containerd[1433]: time="2024-09-04T17:35:29.475830506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:35:29.475835 containerd[1433]: time="2024-09-04T17:35:29.475845395Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:35:29.475973 containerd[1433]: time="2024-09-04T17:35:29.475876887Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:35:29.475973 containerd[1433]: time="2024-09-04T17:35:29.475893565Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 4 17:35:29.475973 containerd[1433]: time="2024-09-04T17:35:29.475904037Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:35:29.475973 containerd[1433]: time="2024-09-04T17:35:29.475916603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476062789Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476085103Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476099459Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476113054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476126115Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476143060Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476157720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476164 containerd[1433]: time="2024-09-04T17:35:29.476169639Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476400 containerd[1433]: time="2024-09-04T17:35:29.476183805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476400 containerd[1433]: time="2024-09-04T17:35:29.476198998Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476400 containerd[1433]: time="2024-09-04T17:35:29.476211031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476400 containerd[1433]: time="2024-09-04T17:35:29.476222417Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:35:29.476400 containerd[1433]: time="2024-09-04T17:35:29.476316967Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:35:29.476566 containerd[1433]: time="2024-09-04T17:35:29.476538245Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:35:29.476601 containerd[1433]: time="2024-09-04T17:35:29.476568899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.476601 containerd[1433]: time="2024-09-04T17:35:29.476583026Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:35:29.476688 containerd[1433]: time="2024-09-04T17:35:29.476604655Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 4 17:35:29.477359 containerd[1433]: time="2024-09-04T17:35:29.477325303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477451 containerd[1433]: time="2024-09-04T17:35:29.477436076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477484 containerd[1433]: time="2024-09-04T17:35:29.477454430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477484 containerd[1433]: time="2024-09-04T17:35:29.477468138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477484 containerd[1433]: time="2024-09-04T17:35:29.477480476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477534 containerd[1433]: time="2024-09-04T17:35:29.477517641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477534 containerd[1433]: time="2024-09-04T17:35:29.477531540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477580 containerd[1433]: time="2024-09-04T17:35:29.477544106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477580 containerd[1433]: time="2024-09-04T17:35:29.477560975Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:35:29.477775 containerd[1433]: time="2024-09-04T17:35:29.477755712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477824 containerd[1433]: time="2024-09-04T17:35:29.477789108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477845 containerd[1433]: time="2024-09-04T17:35:29.477823493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477845 containerd[1433]: time="2024-09-04T17:35:29.477838001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477891 containerd[1433]: time="2024-09-04T17:35:29.477855784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477891 containerd[1433]: time="2024-09-04T17:35:29.477883582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477925 containerd[1433]: time="2024-09-04T17:35:29.477895463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:35:29.477925 containerd[1433]: time="2024-09-04T17:35:29.477908486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:35:29.478407 containerd[1433]: time="2024-09-04T17:35:29.478330784Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:35:29.478407 containerd[1433]: time="2024-09-04T17:35:29.478392281Z" level=info msg="Connect containerd service" Sep 4 17:35:29.478648 containerd[1433]: time="2024-09-04T17:35:29.478428495Z" level=info msg="using legacy CRI server" Sep 4 17:35:29.478648 containerd[1433]: time="2024-09-04T17:35:29.478435768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:35:29.478648 containerd[1433]: time="2024-09-04T17:35:29.478608152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:35:29.479455 containerd[1433]: time="2024-09-04T17:35:29.479426207Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:35:29.479507 
containerd[1433]: time="2024-09-04T17:35:29.479488999Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:35:29.479552 containerd[1433]: time="2024-09-04T17:35:29.479507087Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:35:29.479662 containerd[1433]: time="2024-09-04T17:35:29.479516911Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:35:29.479662 containerd[1433]: time="2024-09-04T17:35:29.479648437Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:35:29.479797 containerd[1433]: time="2024-09-04T17:35:29.479619725Z" level=info msg="Start subscribing containerd event" Sep 4 17:35:29.479826 containerd[1433]: time="2024-09-04T17:35:29.479815566Z" level=info msg="Start recovering state" Sep 4 17:35:29.479923 containerd[1433]: time="2024-09-04T17:35:29.479875655Z" level=info msg="Start event monitor" Sep 4 17:35:29.479923 containerd[1433]: time="2024-09-04T17:35:29.479891648Z" level=info msg="Start snapshots syncer" Sep 4 17:35:29.479923 containerd[1433]: time="2024-09-04T17:35:29.479900064Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:35:29.479923 containerd[1433]: time="2024-09-04T17:35:29.479907718Z" level=info msg="Start streaming server" Sep 4 17:35:29.480709 containerd[1433]: time="2024-09-04T17:35:29.480686893Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:35:29.480767 containerd[1433]: time="2024-09-04T17:35:29.480739519Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:35:29.482142 containerd[1433]: time="2024-09-04T17:35:29.481860797Z" level=info msg="containerd successfully booted in 0.043906s" Sep 4 17:35:29.481994 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:35:29.563081 tar[1431]: linux-arm64/LICENSE Sep 4 17:35:29.563195 tar[1431]: linux-arm64/README.md Sep 4 17:35:29.576080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:35:29.837991 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:35:29.856587 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:35:29.867116 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:35:29.874172 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:35:29.874386 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:35:29.877155 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:35:29.895665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:35:29.911142 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:35:29.913280 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:35:29.914428 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:35:30.529902 systemd-networkd[1373]: eth0: Gained IPv6LL Sep 4 17:35:30.532290 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:35:30.534023 systemd[1]: Reached target network-online.target - Network is Online. 
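[Editor's note, not part of the log] The containerd error a few entries up, "no network config found in /etc/cni/net.d: cni plugin not initialized", typically just means no CNI configuration has been installed yet on this fresh node; the "Start cni network conf syncer for default" loop above will pick one up once it appears. As an illustrative sketch only, the snippet below shows the general shape of a minimal bridge conflist of the kind tooling usually drops into /etc/cni/net.d. The file name, network name, bridge name, and subnet are assumptions chosen for the demo, not values from this system.

    #!/usr/bin/env python3
    # Illustrative only: write a minimal CNI bridge conflist of the kind the CRI
    # plugin looks for under /etc/cni/net.d. All names/addresses are assumptions.
    import json
    import pathlib

    conf_dir = pathlib.Path("/tmp/cni-demo")      # would be /etc/cni/net.d on a real node
    conf_dir.mkdir(parents=True, exist_ok=True)

    conflist = {
        "cniVersion": "0.4.0",
        "name": "demo-bridge",                    # hypothetical network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = conf_dir / "10-demo-bridge.conflist"
    path.write_text(json.dumps(conflist, indent=2))
    print(f"wrote {path}")

In practice a full CNI (flannel, Calico, etc.) installs its own configuration; the sketch only illustrates the file format that the error message is complaining about.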
Sep 4 17:35:30.545097 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:35:30.547559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:35:30.549590 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:35:30.564980 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:35:30.565180 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:35:30.567401 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:35:30.569669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:35:31.026466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:31.028023 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:35:31.031660 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:35:31.032170 systemd[1]: Startup finished in 559ms (kernel) + 4.696s (initrd) + 3.661s (userspace) = 8.917s. Sep 4 17:35:31.522471 kubelet[1518]: E0904 17:35:31.522299 1518 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:35:31.524917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:35:31.525055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:35:35.351457 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:35:35.352571 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). Sep 4 17:35:35.410621 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:35.412434 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:35.431272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:35:35.448067 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:35:35.449777 systemd-logind[1413]: New session 1 of user core. Sep 4 17:35:35.457502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:35:35.468619 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:35:35.473932 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:35.567141 systemd[1536]: Queued start job for default target default.target. Sep 4 17:35:35.582847 systemd[1536]: Created slice app.slice - User Application Slice. Sep 4 17:35:35.582886 systemd[1536]: Reached target paths.target - Paths. Sep 4 17:35:35.582899 systemd[1536]: Reached target timers.target - Timers. Sep 4 17:35:35.584482 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:35:35.594474 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:35:35.594544 systemd[1536]: Reached target sockets.target - Sockets. Sep 4 17:35:35.594556 systemd[1536]: Reached target basic.target - Basic System. 
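[Editor's note, not part of the log] The kubelet failure just above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", exit status 1) is the expected state before kubeadm init/join has written that file; the same error repeats on the later restart attempts further down until it exists. As an illustrative sketch only, the snippet below writes a minimal KubeletConfiguration-shaped document. The field names are real upstream fields, but the values and the output path are assumptions for the demo, and JSON is used because it is also valid YAML, so no extra dependency is needed.

    #!/usr/bin/env python3
    # Illustrative sketch of the kind of minimal KubeletConfiguration that normally
    # lives at /var/lib/kubelet/config.yaml once kubeadm has run. Values are demo
    # assumptions; JSON output is used because JSON is also valid YAML.
    import json
    import pathlib

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",                 # matches SystemdCgroup:true in the containerd config above
        "staticPodPath": "/etc/kubernetes/manifests",
        "failSwapOn": False,
    }

    out = pathlib.Path("/tmp/kubelet-config-demo.yaml")   # /var/lib/kubelet/config.yaml on a real node
    out.write_text(json.dumps(config, indent=2) + "\n")
    print(out.read_text())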
Sep 4 17:35:35.594597 systemd[1536]: Reached target default.target - Main User Target. Sep 4 17:35:35.594625 systemd[1536]: Startup finished in 113ms. Sep 4 17:35:35.594918 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:35:35.596562 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:35:35.654957 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:55020.service - OpenSSH per-connection server daemon (10.0.0.1:55020). Sep 4 17:35:35.707764 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 55020 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:35.709136 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:35.713473 systemd-logind[1413]: New session 2 of user core. Sep 4 17:35:35.719927 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:35:35.776340 sshd[1547]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:35.786107 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:55020.service: Deactivated successfully. Sep 4 17:35:35.787915 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:35:35.795374 systemd-logind[1413]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:35:35.806040 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032). Sep 4 17:35:35.810253 systemd-logind[1413]: Removed session 2. Sep 4 17:35:35.843594 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:35.844280 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:35.849859 systemd-logind[1413]: New session 3 of user core. Sep 4 17:35:35.857971 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:35:35.910150 sshd[1554]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:35.915618 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:55032.service: Deactivated successfully. Sep 4 17:35:35.919379 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:35:35.922575 systemd-logind[1413]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:35:35.929221 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:55034.service - OpenSSH per-connection server daemon (10.0.0.1:55034). Sep 4 17:35:35.942192 systemd-logind[1413]: Removed session 3. Sep 4 17:35:35.971931 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 55034 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:35.973430 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:35.976865 systemd-logind[1413]: New session 4 of user core. Sep 4 17:35:35.984955 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:35:36.036260 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:36.053117 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:55034.service: Deactivated successfully. Sep 4 17:35:36.054422 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:35:36.056167 systemd-logind[1413]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:35:36.072561 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:55048.service - OpenSSH per-connection server daemon (10.0.0.1:55048). Sep 4 17:35:36.073528 systemd-logind[1413]: Removed session 4. 
Sep 4 17:35:36.108756 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 55048 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:36.109956 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:36.113927 systemd-logind[1413]: New session 5 of user core. Sep 4 17:35:36.120929 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:35:36.185294 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:35:36.185526 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:35:36.199546 sudo[1571]: pam_unix(sudo:session): session closed for user root Sep 4 17:35:36.202934 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:36.216157 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:55048.service: Deactivated successfully. Sep 4 17:35:36.217698 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:35:36.219940 systemd-logind[1413]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:35:36.221248 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:55052.service - OpenSSH per-connection server daemon (10.0.0.1:55052). Sep 4 17:35:36.223949 systemd-logind[1413]: Removed session 5. Sep 4 17:35:36.285609 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 55052 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:36.287423 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:36.291377 systemd-logind[1413]: New session 6 of user core. Sep 4 17:35:36.300032 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:35:36.351484 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:35:36.351735 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:35:36.354851 sudo[1580]: pam_unix(sudo:session): session closed for user root Sep 4 17:35:36.359208 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:35:36.359449 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:35:36.377231 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:35:36.378111 auditctl[1583]: No rules Sep 4 17:35:36.378418 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:35:36.378578 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:35:36.381032 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:35:36.403008 augenrules[1601]: No rules Sep 4 17:35:36.404264 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:35:36.405226 sudo[1579]: pam_unix(sudo:session): session closed for user root Sep 4 17:35:36.406942 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:36.417141 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:55052.service: Deactivated successfully. Sep 4 17:35:36.418546 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:35:36.419887 systemd-logind[1413]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:35:36.421023 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Sep 4 17:35:36.421900 systemd-logind[1413]: Removed session 6. 
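[Editor's note, not part of the log] The stretch above is a series of short SSH sessions for user core, each a matched pair of pam_unix "session opened"/"session closed" records keyed by the sshd PID, plus a sudo command. As a small log-analysis sketch, the snippet below pairs those events and reports how long each session lasted; it assumes the exact timestamp layout seen in these lines (similar to journalctl's short-precise output), and the sample input is copied from this log.

    #!/usr/bin/env python3
    # Sketch: pair sshd "session opened"/"session closed" pam_unix lines by PID
    # and print the session duration. Assumes the "Sep 4 17:35:35.709136 sshd[1547]: ..."
    # layout used in this journal.
    import re
    from datetime import datetime

    LINE = re.compile(
        r"(?P<mon>\w{3}) +(?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
        r"sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): session (?P<event>opened|closed)"
    )

    def parse(lines, year=2024):
        opened = {}
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(
                f"{year} {m['mon']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S.%f"
            )
            if m["event"] == "opened":
                opened[m["pid"]] = ts
            elif m["pid"] in opened:
                yield m["pid"], (ts - opened.pop(m["pid"])).total_seconds()

    sample = [
        "Sep 4 17:35:35.709136 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)",
        "Sep 4 17:35:35.776340 sshd[1547]: pam_unix(sshd:session): session closed for user core",
    ]
    for pid, seconds in parse(sample):
        print(f"sshd[{pid}] session lasted {seconds:.3f}s")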
Sep 4 17:35:36.459843 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:35:36.460935 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:35:36.465845 systemd-logind[1413]: New session 7 of user core. Sep 4 17:35:36.475979 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:35:36.528223 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:35:36.528570 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:35:36.634060 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:35:36.634167 (dockerd)[1622]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:35:36.861692 dockerd[1622]: time="2024-09-04T17:35:36.861565863Z" level=info msg="Starting up" Sep 4 17:35:36.950362 dockerd[1622]: time="2024-09-04T17:35:36.950310263Z" level=info msg="Loading containers: start." Sep 4 17:35:37.031818 kernel: Initializing XFRM netlink socket Sep 4 17:35:37.093207 systemd-networkd[1373]: docker0: Link UP Sep 4 17:35:37.110079 dockerd[1622]: time="2024-09-04T17:35:37.110039239Z" level=info msg="Loading containers: done." Sep 4 17:35:37.167073 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3412898458-merged.mount: Deactivated successfully. Sep 4 17:35:37.169388 dockerd[1622]: time="2024-09-04T17:35:37.169323253Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:35:37.169547 dockerd[1622]: time="2024-09-04T17:35:37.169528843Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:35:37.169656 dockerd[1622]: time="2024-09-04T17:35:37.169641710Z" level=info msg="Daemon has completed initialization" Sep 4 17:35:37.196126 dockerd[1622]: time="2024-09-04T17:35:37.195991354Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:35:37.196207 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:35:37.818772 containerd[1433]: time="2024-09-04T17:35:37.818719920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:35:38.511245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069385576.mount: Deactivated successfully. 
Sep 4 17:35:39.963645 containerd[1433]: time="2024-09-04T17:35:39.963572838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:39.964185 containerd[1433]: time="2024-09-04T17:35:39.964141996Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=32283564" Sep 4 17:35:39.965101 containerd[1433]: time="2024-09-04T17:35:39.965065263Z" level=info msg="ImageCreate event name:\"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:39.967948 containerd[1433]: time="2024-09-04T17:35:39.967910301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:39.970071 containerd[1433]: time="2024-09-04T17:35:39.970030691Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"32280362\" in 2.151261553s" Sep 4 17:35:39.970144 containerd[1433]: time="2024-09-04T17:35:39.970075399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\"" Sep 4 17:35:39.989212 containerd[1433]: time="2024-09-04T17:35:39.989171073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:35:41.776879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:35:41.786020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:35:41.885953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:41.890479 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:35:41.941114 kubelet[1837]: E0904 17:35:41.941058 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:35:41.945463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:35:41.945609 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:35:42.182322 containerd[1433]: time="2024-09-04T17:35:42.181578029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:42.183841 containerd[1433]: time="2024-09-04T17:35:42.183799983Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=29368212" Sep 4 17:35:42.184917 containerd[1433]: time="2024-09-04T17:35:42.184880322Z" level=info msg="ImageCreate event name:\"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:42.187892 containerd[1433]: time="2024-09-04T17:35:42.187842439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:42.189037 containerd[1433]: time="2024-09-04T17:35:42.188929718Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"30855477\" in 2.199574395s" Sep 4 17:35:42.189037 containerd[1433]: time="2024-09-04T17:35:42.188961765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\"" Sep 4 17:35:42.209652 containerd[1433]: time="2024-09-04T17:35:42.209611946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:35:43.317783 containerd[1433]: time="2024-09-04T17:35:43.317733831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:43.318731 containerd[1433]: time="2024-09-04T17:35:43.318568325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=15751075" Sep 4 17:35:43.321518 containerd[1433]: time="2024-09-04T17:35:43.321475953Z" level=info msg="ImageCreate event name:\"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:43.325866 containerd[1433]: time="2024-09-04T17:35:43.325830982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:43.326919 containerd[1433]: time="2024-09-04T17:35:43.326885395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"17238358\" in 1.11723108s" Sep 4 17:35:43.326971 containerd[1433]: time="2024-09-04T17:35:43.326919739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\"" Sep 4 17:35:43.348022 containerd[1433]: 
time="2024-09-04T17:35:43.347984225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:35:44.407044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290131944.mount: Deactivated successfully. Sep 4 17:35:45.290368 containerd[1433]: time="2024-09-04T17:35:45.290317475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:45.291228 containerd[1433]: time="2024-09-04T17:35:45.291176019Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=25251885" Sep 4 17:35:45.292868 containerd[1433]: time="2024-09-04T17:35:45.292224680Z" level=info msg="ImageCreate event name:\"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:45.295749 containerd[1433]: time="2024-09-04T17:35:45.295708376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:45.296419 containerd[1433]: time="2024-09-04T17:35:45.296368412Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"25250902\" in 1.948338543s" Sep 4 17:35:45.296419 containerd[1433]: time="2024-09-04T17:35:45.296415543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\"" Sep 4 17:35:45.315162 containerd[1433]: time="2024-09-04T17:35:45.315063452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:35:45.839084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011860187.mount: Deactivated successfully. 
Sep 4 17:35:46.368151 containerd[1433]: time="2024-09-04T17:35:46.368084298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.368938 containerd[1433]: time="2024-09-04T17:35:46.368900628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Sep 4 17:35:46.371998 containerd[1433]: time="2024-09-04T17:35:46.371959657Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.378310 containerd[1433]: time="2024-09-04T17:35:46.377871661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.379167 containerd[1433]: time="2024-09-04T17:35:46.379028215Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.06392331s" Sep 4 17:35:46.379167 containerd[1433]: time="2024-09-04T17:35:46.379069767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:35:46.400591 containerd[1433]: time="2024-09-04T17:35:46.400349433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:35:46.922059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046971686.mount: Deactivated successfully. 
Sep 4 17:35:46.941144 containerd[1433]: time="2024-09-04T17:35:46.939556889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.942364 containerd[1433]: time="2024-09-04T17:35:46.942306818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Sep 4 17:35:46.944616 containerd[1433]: time="2024-09-04T17:35:46.943339350Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.950154 containerd[1433]: time="2024-09-04T17:35:46.948704203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:46.950154 containerd[1433]: time="2024-09-04T17:35:46.949576693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 549.181807ms" Sep 4 17:35:46.950154 containerd[1433]: time="2024-09-04T17:35:46.949618007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:35:46.974336 containerd[1433]: time="2024-09-04T17:35:46.974296685Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:35:47.561140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752857259.mount: Deactivated successfully. Sep 4 17:35:49.104406 containerd[1433]: time="2024-09-04T17:35:49.104356938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:49.105457 containerd[1433]: time="2024-09-04T17:35:49.105412738Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Sep 4 17:35:49.107071 containerd[1433]: time="2024-09-04T17:35:49.107017387Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:49.110817 containerd[1433]: time="2024-09-04T17:35:49.109878088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:49.112180 containerd[1433]: time="2024-09-04T17:35:49.112099401Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.137761865s" Sep 4 17:35:49.112180 containerd[1433]: time="2024-09-04T17:35:49.112136318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:35:52.085368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
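[Editor's note, not part of the log] Each image pull above ends with a "Pulled image ... size \"N\" in D" entry (kube-apiserver in 2.151261553s, etcd in 2.137761865s, pause in 549.181807ms, and so on). A quick way to turn those into effective throughput figures is to scrape the byte size and duration out of each line; the sketch below does that for two lines adapted from this log. The parsing assumptions (only the s/ms duration suffixes that appear here, sizes given in bytes) are mine.

    #!/usr/bin/env python3
    # Sketch: extract image size (bytes) and pull duration from containerd
    # "Pulled image ... size \"N\" in D" lines and report effective throughput.
    # Only the "...s" / "...ms" duration forms seen in this log are handled.
    import re

    PULLED = re.compile(
        r'Pulled image "(?P<image>[^"]+)".*size "(?P<size>\d+)" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def throughput(line):
        line = line.replace('\\"', '"')   # the journal shows the quotes backslash-escaped
        m = PULLED.search(line)
        if not m:
            return None
        seconds = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        mib = int(m["size"]) / (1024 * 1024)
        return m["image"], mib, seconds, mib / seconds

    samples = [
        'Pulled image "registry.k8s.io/kube-apiserver:v1.29.8" ... size "32280362" in 2.151261553s',
        'Pulled image "registry.k8s.io/etcd:3.5.10-0" ... size "65198393" in 2.137761865s',
    ]
    for line in samples:
        result = throughput(line)
        if result:
            image, mib, secs, rate = result
            print(f"{image}: {mib:.1f} MiB in {secs:.2f}s ~ {rate:.1f} MiB/s")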
Sep 4 17:35:52.095001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:35:52.209732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:52.213968 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:35:52.257308 kubelet[2064]: E0904 17:35:52.257046 2064 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:35:52.260373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:35:52.260500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:35:55.434557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:55.447113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:35:55.463626 systemd[1]: Reloading requested from client PID 2081 ('systemctl') (unit session-7.scope)... Sep 4 17:35:55.463643 systemd[1]: Reloading... Sep 4 17:35:55.534820 zram_generator::config[2118]: No configuration found. Sep 4 17:35:55.660119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:35:55.716772 systemd[1]: Reloading finished in 252 ms. Sep 4 17:35:55.770693 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:35:55.770776 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:35:55.771008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:55.773501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:35:55.878902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:35:55.885020 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:35:55.931348 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:35:55.931348 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:35:55.931348 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:35:55.932159 kubelet[2164]: I0904 17:35:55.932103 2164 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:35:56.654821 kubelet[2164]: I0904 17:35:56.653774 2164 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:35:56.654821 kubelet[2164]: I0904 17:35:56.653827 2164 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:35:56.654821 kubelet[2164]: I0904 17:35:56.654037 2164 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:35:56.692755 kubelet[2164]: I0904 17:35:56.692641 2164 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:35:56.693174 kubelet[2164]: E0904 17:35:56.693145 2164 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.704780 kubelet[2164]: I0904 17:35:56.704747 2164 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:35:56.704998 kubelet[2164]: I0904 17:35:56.704985 2164 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:35:56.705188 kubelet[2164]: I0904 17:35:56.705171 2164 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:35:56.705273 kubelet[2164]: I0904 17:35:56.705196 2164 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:35:56.705273 kubelet[2164]: I0904 17:35:56.705205 2164 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:35:56.706380 kubelet[2164]: I0904 17:35:56.706341 2164 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:35:56.710454 kubelet[2164]: I0904 17:35:56.710424 2164 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:35:56.710493 kubelet[2164]: I0904 
17:35:56.710457 2164 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:35:56.710493 kubelet[2164]: I0904 17:35:56.710490 2164 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:35:56.710544 kubelet[2164]: I0904 17:35:56.710507 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:35:56.712264 kubelet[2164]: W0904 17:35:56.712089 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.712264 kubelet[2164]: E0904 17:35:56.712153 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.712264 kubelet[2164]: I0904 17:35:56.712168 2164 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:35:56.712264 kubelet[2164]: W0904 17:35:56.712163 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.712264 kubelet[2164]: E0904 17:35:56.712207 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.712826 kubelet[2164]: I0904 17:35:56.712807 2164 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:35:56.713379 kubelet[2164]: W0904 17:35:56.713362 2164 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
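[Editor's note, not part of the log] Every request the kubelet makes against https://10.0.0.135:6443 above fails with "connect: connection refused": nothing is listening on the API server port yet, since the static control-plane pods are only created further down. A minimal reproduction of that symptom, independent of the kubelet, is a bare TCP connect; the host and port below are taken from the log, the rest is an illustrative sketch.

    #!/usr/bin/env python3
    # Sketch: reproduce the "dial tcp 10.0.0.135:6443: connect: connection refused"
    # symptom with a plain TCP connect. Host/port come from the log above.
    import socket

    def api_server_reachable(host="10.0.0.135", port=6443, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:       # ConnectionRefusedError, timeouts, etc.
            print(f"dial tcp {host}:{port}: {exc}")
            return False

    if __name__ == "__main__":
        print("reachable" if api_server_reachable() else "not reachable yet")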
Sep 4 17:35:56.714829 kubelet[2164]: I0904 17:35:56.714425 2164 server.go:1256] "Started kubelet" Sep 4 17:35:56.715233 kubelet[2164]: I0904 17:35:56.715215 2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:35:56.715598 kubelet[2164]: I0904 17:35:56.715520 2164 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:35:56.715598 kubelet[2164]: I0904 17:35:56.715586 2164 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:35:56.716235 kubelet[2164]: I0904 17:35:56.716203 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:35:56.717765 kubelet[2164]: I0904 17:35:56.716464 2164 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:35:56.721267 kubelet[2164]: E0904 17:35:56.721242 2164 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:35:56.721324 kubelet[2164]: I0904 17:35:56.721278 2164 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:35:56.721394 kubelet[2164]: I0904 17:35:56.721377 2164 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:35:56.721700 kubelet[2164]: I0904 17:35:56.721663 2164 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:35:56.722089 kubelet[2164]: E0904 17:35:56.722054 2164 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21b0e9712c097 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:35:56.714401943 +0000 UTC m=+0.822555830,LastTimestamp:2024-09-04 17:35:56.714401943 +0000 UTC m=+0.822555830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:35:56.722215 kubelet[2164]: E0904 17:35:56.722163 2164 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:35:56.722531 kubelet[2164]: W0904 17:35:56.722258 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.722574 kubelet[2164]: E0904 17:35:56.722556 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.723017 kubelet[2164]: I0904 17:35:56.722971 2164 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:35:56.723074 kubelet[2164]: I0904 17:35:56.723057 2164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:35:56.723647 kubelet[2164]: E0904 17:35:56.723532 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" Sep 4 17:35:56.723827 kubelet[2164]: I0904 17:35:56.723809 2164 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:35:56.734540 kubelet[2164]: I0904 17:35:56.734510 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:35:56.736308 kubelet[2164]: I0904 17:35:56.736020 2164 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:35:56.736308 kubelet[2164]: I0904 17:35:56.736047 2164 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:35:56.736308 kubelet[2164]: I0904 17:35:56.736064 2164 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:35:56.736308 kubelet[2164]: E0904 17:35:56.736112 2164 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:35:56.736914 kubelet[2164]: W0904 17:35:56.736861 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.736990 kubelet[2164]: E0904 17:35:56.736924 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:56.738608 kubelet[2164]: I0904 17:35:56.738511 2164 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:35:56.738608 kubelet[2164]: I0904 17:35:56.738528 2164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:35:56.738608 kubelet[2164]: I0904 17:35:56.738546 2164 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:35:56.799093 kubelet[2164]: I0904 17:35:56.799038 2164 policy_none.go:49] "None policy: Start" Sep 4 17:35:56.799847 kubelet[2164]: I0904 17:35:56.799822 2164 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:35:56.799926 kubelet[2164]: I0904 17:35:56.799876 2164 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:35:56.809841 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:35:56.822544 kubelet[2164]: I0904 17:35:56.822506 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:35:56.822944 kubelet[2164]: E0904 17:35:56.822927 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 4 17:35:56.827393 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:35:56.830044 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:35:56.837234 kubelet[2164]: E0904 17:35:56.837202 2164 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:35:56.839581 kubelet[2164]: I0904 17:35:56.839562 2164 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:35:56.839987 kubelet[2164]: I0904 17:35:56.839859 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:35:56.841639 kubelet[2164]: E0904 17:35:56.841617 2164 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:35:56.924906 kubelet[2164]: E0904 17:35:56.924777 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" Sep 4 17:35:57.023978 kubelet[2164]: I0904 17:35:57.023945 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:35:57.024342 kubelet[2164]: E0904 17:35:57.024303 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 4 17:35:57.037475 kubelet[2164]: I0904 17:35:57.037341 2164 topology_manager.go:215] "Topology Admit Handler" podUID="89b578c765991e5e85d58151c73f048e" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:35:57.038388 kubelet[2164]: I0904 17:35:57.038357 2164 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:35:57.039377 kubelet[2164]: I0904 17:35:57.039289 2164 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:35:57.044115 systemd[1]: Created slice kubepods-burstable-pod89b578c765991e5e85d58151c73f048e.slice - libcontainer container kubepods-burstable-pod89b578c765991e5e85d58151c73f048e.slice. Sep 4 17:35:57.056630 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice. Sep 4 17:35:57.069026 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice. 
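[Editor's note, not part of the log] Note how the "Failed to ensure lease exists, will retry" interval grows across these entries: 200ms first, then 400ms, and 800ms a little further down. That is the kubelet's lease controller backing off while the API server is unreachable. The snippet below is only an illustrative sketch of that doubling pattern, not the kubelet's actual code; the cap and attempt count are made-up parameters for the demo.

    #!/usr/bin/env python3
    # Sketch of the doubling retry interval visible in the lease errors above
    # (200ms -> 400ms -> 800ms). The cap and attempts are demo assumptions,
    # not values taken from the kubelet source.
    def backoff_intervals(start=0.2, factor=2.0, cap=7.0, attempts=6):
        interval = start
        for _ in range(attempts):
            yield interval
            interval = min(interval * factor, cap)

    print([f"{i * 1000:.0f}ms" for i in backoff_intervals()])
    # ['200ms', '400ms', '800ms', '1600ms', '3200ms', '6400ms']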
Sep 4 17:35:57.123330 kubelet[2164]: I0904 17:35:57.123291 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:35:57.123330 kubelet[2164]: I0904 17:35:57.123334 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:35:57.123483 kubelet[2164]: I0904 17:35:57.123355 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:35:57.123483 kubelet[2164]: I0904 17:35:57.123375 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:35:57.123483 kubelet[2164]: I0904 17:35:57.123394 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:35:57.123483 kubelet[2164]: I0904 17:35:57.123415 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:35:57.123483 kubelet[2164]: I0904 17:35:57.123442 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:35:57.123591 kubelet[2164]: I0904 17:35:57.123460 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:35:57.123591 kubelet[2164]: I0904 17:35:57.123479 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 17:35:57.325441 kubelet[2164]: E0904 17:35:57.325331 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" Sep 4 17:35:57.357414 kubelet[2164]: E0904 17:35:57.357371 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:57.358059 containerd[1433]: time="2024-09-04T17:35:57.358012661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89b578c765991e5e85d58151c73f048e,Namespace:kube-system,Attempt:0,}" Sep 4 17:35:57.367238 kubelet[2164]: E0904 17:35:57.367212 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:57.371361 kubelet[2164]: E0904 17:35:57.371261 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:57.371561 containerd[1433]: time="2024-09-04T17:35:57.371387969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}" Sep 4 17:35:57.371636 containerd[1433]: time="2024-09-04T17:35:57.371606320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}" Sep 4 17:35:57.426285 kubelet[2164]: I0904 17:35:57.426037 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:35:57.426513 kubelet[2164]: E0904 17:35:57.426498 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 4 17:35:57.577543 kubelet[2164]: W0904 17:35:57.577385 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.577543 kubelet[2164]: E0904 17:35:57.577456 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.618896 kubelet[2164]: W0904 17:35:57.618841 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.619074 kubelet[2164]: E0904 17:35:57.619054 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.807433 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1551573230.mount: Deactivated successfully. Sep 4 17:35:57.812601 containerd[1433]: time="2024-09-04T17:35:57.812555869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:35:57.814996 containerd[1433]: time="2024-09-04T17:35:57.814895674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:35:57.815753 containerd[1433]: time="2024-09-04T17:35:57.815654047Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:35:57.817577 containerd[1433]: time="2024-09-04T17:35:57.817441444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:35:57.818000 containerd[1433]: time="2024-09-04T17:35:57.817932962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:35:57.818925 containerd[1433]: time="2024-09-04T17:35:57.818879480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:35:57.821073 containerd[1433]: time="2024-09-04T17:35:57.821035615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:35:57.821984 containerd[1433]: time="2024-09-04T17:35:57.821932469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:35:57.822889 containerd[1433]: time="2024-09-04T17:35:57.822851019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 451.373352ms" Sep 4 17:35:57.826371 containerd[1433]: time="2024-09-04T17:35:57.826329043Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.213739ms" Sep 4 17:35:57.831594 containerd[1433]: time="2024-09-04T17:35:57.831469766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 459.779781ms" Sep 4 17:35:57.834579 kubelet[2164]: W0904 17:35:57.834353 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.834579 kubelet[2164]: E0904 17:35:57.834533 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.942542 kubelet[2164]: W0904 17:35:57.942433 2164 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:57.942542 kubelet[2164]: E0904 17:35:57.942493 2164 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018315596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018671960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018700012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018715156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018804467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018861051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018890941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:58.019046 containerd[1433]: time="2024-09-04T17:35:58.018910281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.019597 containerd[1433]: time="2024-09-04T17:35:58.018539252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:58.019597 containerd[1433]: time="2024-09-04T17:35:58.019574297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.019671 containerd[1433]: time="2024-09-04T17:35:58.019592199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:58.019671 containerd[1433]: time="2024-09-04T17:35:58.019605066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:58.052047 systemd[1]: Started cri-containerd-5878c2e7d7abd2c6d9de3f5bee8e1331e9b44c90dc955288fbb1749e4a93bd3d.scope - libcontainer container 5878c2e7d7abd2c6d9de3f5bee8e1331e9b44c90dc955288fbb1749e4a93bd3d. Sep 4 17:35:58.055738 systemd[1]: Started cri-containerd-2fad53452fe663e25caa8218ba35325828e8aca608d2204b57ac19cc48bbe412.scope - libcontainer container 2fad53452fe663e25caa8218ba35325828e8aca608d2204b57ac19cc48bbe412. Sep 4 17:35:58.057278 systemd[1]: Started cri-containerd-fbf641abf2381bcab4965a0b2f58c64f2ee32ffc014325e39a8400e75fca039d.scope - libcontainer container fbf641abf2381bcab4965a0b2f58c64f2ee32ffc014325e39a8400e75fca039d. Sep 4 17:35:58.094762 containerd[1433]: time="2024-09-04T17:35:58.094721613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5878c2e7d7abd2c6d9de3f5bee8e1331e9b44c90dc955288fbb1749e4a93bd3d\"" Sep 4 17:35:58.095119 containerd[1433]: time="2024-09-04T17:35:58.095010883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbf641abf2381bcab4965a0b2f58c64f2ee32ffc014325e39a8400e75fca039d\"" Sep 4 17:35:58.096198 kubelet[2164]: E0904 17:35:58.096178 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:58.096774 kubelet[2164]: E0904 17:35:58.096514 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:58.097666 containerd[1433]: time="2024-09-04T17:35:58.097637736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89b578c765991e5e85d58151c73f048e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fad53452fe663e25caa8218ba35325828e8aca608d2204b57ac19cc48bbe412\"" Sep 4 17:35:58.098841 kubelet[2164]: E0904 17:35:58.098346 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:58.100079 containerd[1433]: time="2024-09-04T17:35:58.100035737Z" level=info msg="CreateContainer within sandbox \"fbf641abf2381bcab4965a0b2f58c64f2ee32ffc014325e39a8400e75fca039d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:35:58.100485 containerd[1433]: time="2024-09-04T17:35:58.100456396Z" level=info msg="CreateContainer within sandbox \"5878c2e7d7abd2c6d9de3f5bee8e1331e9b44c90dc955288fbb1749e4a93bd3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:35:58.101030 containerd[1433]: time="2024-09-04T17:35:58.101004528Z" level=info msg="CreateContainer within sandbox \"2fad53452fe663e25caa8218ba35325828e8aca608d2204b57ac19cc48bbe412\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:35:58.120161 containerd[1433]: time="2024-09-04T17:35:58.120115453Z" level=info msg="CreateContainer within sandbox \"fbf641abf2381bcab4965a0b2f58c64f2ee32ffc014325e39a8400e75fca039d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea18186937c9b6115ec4ed7c0388d74aa4d08c13d884b99451e7b47c1f630fea\"" Sep 4 17:35:58.121874 
containerd[1433]: time="2024-09-04T17:35:58.121024584Z" level=info msg="StartContainer for \"ea18186937c9b6115ec4ed7c0388d74aa4d08c13d884b99451e7b47c1f630fea\"" Sep 4 17:35:58.125504 containerd[1433]: time="2024-09-04T17:35:58.125434972Z" level=info msg="CreateContainer within sandbox \"2fad53452fe663e25caa8218ba35325828e8aca608d2204b57ac19cc48bbe412\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc72e5d5161f45d773a61c203b04d3775b63548018fa135ef9a55a06651bc761\"" Sep 4 17:35:58.127102 containerd[1433]: time="2024-09-04T17:35:58.126945301Z" level=info msg="StartContainer for \"cc72e5d5161f45d773a61c203b04d3775b63548018fa135ef9a55a06651bc761\"" Sep 4 17:35:58.127190 kubelet[2164]: E0904 17:35:58.127117 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" Sep 4 17:35:58.133442 containerd[1433]: time="2024-09-04T17:35:58.133352333Z" level=info msg="CreateContainer within sandbox \"5878c2e7d7abd2c6d9de3f5bee8e1331e9b44c90dc955288fbb1749e4a93bd3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8d8c0b414d33716a021cf57c400bd471063265bef0fd114245ef083a9f6cdbf\"" Sep 4 17:35:58.133870 containerd[1433]: time="2024-09-04T17:35:58.133828057Z" level=info msg="StartContainer for \"d8d8c0b414d33716a021cf57c400bd471063265bef0fd114245ef083a9f6cdbf\"" Sep 4 17:35:58.151996 systemd[1]: Started cri-containerd-ea18186937c9b6115ec4ed7c0388d74aa4d08c13d884b99451e7b47c1f630fea.scope - libcontainer container ea18186937c9b6115ec4ed7c0388d74aa4d08c13d884b99451e7b47c1f630fea. Sep 4 17:35:58.155937 systemd[1]: Started cri-containerd-cc72e5d5161f45d773a61c203b04d3775b63548018fa135ef9a55a06651bc761.scope - libcontainer container cc72e5d5161f45d773a61c203b04d3775b63548018fa135ef9a55a06651bc761. Sep 4 17:35:58.169031 systemd[1]: Started cri-containerd-d8d8c0b414d33716a021cf57c400bd471063265bef0fd114245ef083a9f6cdbf.scope - libcontainer container d8d8c0b414d33716a021cf57c400bd471063265bef0fd114245ef083a9f6cdbf. 
Sep 4 17:35:58.196375 containerd[1433]: time="2024-09-04T17:35:58.196237194Z" level=info msg="StartContainer for \"ea18186937c9b6115ec4ed7c0388d74aa4d08c13d884b99451e7b47c1f630fea\" returns successfully" Sep 4 17:35:58.230757 containerd[1433]: time="2024-09-04T17:35:58.224579245Z" level=info msg="StartContainer for \"cc72e5d5161f45d773a61c203b04d3775b63548018fa135ef9a55a06651bc761\" returns successfully" Sep 4 17:35:58.232043 kubelet[2164]: I0904 17:35:58.232002 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:35:58.232374 kubelet[2164]: E0904 17:35:58.232345 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 4 17:35:58.239125 containerd[1433]: time="2024-09-04T17:35:58.238941280Z" level=info msg="StartContainer for \"d8d8c0b414d33716a021cf57c400bd471063265bef0fd114245ef083a9f6cdbf\" returns successfully" Sep 4 17:35:58.747217 kubelet[2164]: E0904 17:35:58.746977 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:58.747770 kubelet[2164]: E0904 17:35:58.747748 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:58.752369 kubelet[2164]: E0904 17:35:58.752278 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:59.754042 kubelet[2164]: E0904 17:35:59.754010 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:35:59.833948 kubelet[2164]: I0904 17:35:59.833909 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:36:00.057381 kubelet[2164]: E0904 17:36:00.057284 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:00.137806 kubelet[2164]: I0904 17:36:00.137369 2164 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:36:00.713655 kubelet[2164]: I0904 17:36:00.713469 2164 apiserver.go:52] "Watching apiserver" Sep 4 17:36:00.721652 kubelet[2164]: I0904 17:36:00.721596 2164 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:36:02.405742 kubelet[2164]: E0904 17:36:02.405690 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:02.757747 kubelet[2164]: E0904 17:36:02.757609 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:02.896317 kubelet[2164]: E0904 17:36:02.896262 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:03.020110 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... 
Sep 4 17:36:03.020132 systemd[1]: Reloading... Sep 4 17:36:03.097832 zram_generator::config[2487]: No configuration found. Sep 4 17:36:03.198975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:36:03.270490 systemd[1]: Reloading finished in 249 ms. Sep 4 17:36:03.308108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:36:03.317933 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:36:03.318196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:03.318320 systemd[1]: kubelet.service: Consumed 1.255s CPU time, 118.0M memory peak, 0B memory swap peak. Sep 4 17:36:03.327156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:36:03.439609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:03.445413 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:36:03.495052 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:36:03.495052 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:36:03.495052 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:36:03.495406 kubelet[2526]: I0904 17:36:03.495102 2526 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:36:03.499817 kubelet[2526]: I0904 17:36:03.499767 2526 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:36:03.499817 kubelet[2526]: I0904 17:36:03.499810 2526 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:36:03.500034 kubelet[2526]: I0904 17:36:03.500016 2526 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:36:03.501616 kubelet[2526]: I0904 17:36:03.501588 2526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:36:03.503674 kubelet[2526]: I0904 17:36:03.503638 2526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:36:03.516195 kubelet[2526]: I0904 17:36:03.516163 2526 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:36:03.516386 kubelet[2526]: I0904 17:36:03.516373 2526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:36:03.516549 kubelet[2526]: I0904 17:36:03.516530 2526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:36:03.516643 kubelet[2526]: I0904 17:36:03.516558 2526 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:36:03.516643 kubelet[2526]: I0904 17:36:03.516567 2526 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:36:03.516643 kubelet[2526]: I0904 17:36:03.516594 2526 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:36:03.516723 kubelet[2526]: I0904 17:36:03.516688 2526 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:36:03.516723 kubelet[2526]: I0904 17:36:03.516704 2526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:36:03.516723 kubelet[2526]: I0904 17:36:03.516723 2526 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:36:03.519844 kubelet[2526]: I0904 17:36:03.516740 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:36:03.520237 kubelet[2526]: I0904 17:36:03.520207 2526 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:36:03.520424 kubelet[2526]: I0904 17:36:03.520410 2526 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:36:03.520955 kubelet[2526]: I0904 17:36:03.520922 2526 server.go:1256] "Started kubelet" Sep 4 17:36:03.521945 kubelet[2526]: I0904 17:36:03.521904 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:36:03.522710 kubelet[2526]: I0904 17:36:03.522676 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:36:03.522855 kubelet[2526]: I0904 17:36:03.522826 2526 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:36:03.522855 kubelet[2526]: I0904 17:36:03.522862 
2526 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:36:03.524269 kubelet[2526]: I0904 17:36:03.524243 2526 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:36:03.524846 kubelet[2526]: I0904 17:36:03.524783 2526 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:36:03.525011 kubelet[2526]: I0904 17:36:03.524996 2526 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:36:03.528174 sudo[2541]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:36:03.528473 sudo[2541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 4 17:36:03.530045 kubelet[2526]: I0904 17:36:03.529996 2526 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:36:03.531096 kubelet[2526]: I0904 17:36:03.530783 2526 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:36:03.535625 kubelet[2526]: I0904 17:36:03.530805 2526 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:36:03.537309 kubelet[2526]: E0904 17:36:03.530476 2526 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:36:03.537550 kubelet[2526]: I0904 17:36:03.537253 2526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:36:03.556827 kubelet[2526]: I0904 17:36:03.556784 2526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:36:03.565426 kubelet[2526]: I0904 17:36:03.565313 2526 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:36:03.565426 kubelet[2526]: I0904 17:36:03.565347 2526 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:36:03.565426 kubelet[2526]: I0904 17:36:03.565366 2526 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:36:03.565573 kubelet[2526]: E0904 17:36:03.565478 2526 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:36:03.600371 kubelet[2526]: I0904 17:36:03.600342 2526 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:36:03.600371 kubelet[2526]: I0904 17:36:03.600365 2526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:36:03.600371 kubelet[2526]: I0904 17:36:03.600382 2526 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:36:03.600588 kubelet[2526]: I0904 17:36:03.600531 2526 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:36:03.600588 kubelet[2526]: I0904 17:36:03.600552 2526 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:36:03.600588 kubelet[2526]: I0904 17:36:03.600558 2526 policy_none.go:49] "None policy: Start" Sep 4 17:36:03.601540 kubelet[2526]: I0904 17:36:03.601520 2526 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:36:03.601650 kubelet[2526]: I0904 17:36:03.601638 2526 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:36:03.602245 kubelet[2526]: I0904 17:36:03.601906 2526 state_mem.go:75] "Updated machine memory state" Sep 4 17:36:03.606243 kubelet[2526]: I0904 17:36:03.606211 2526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:36:03.606462 kubelet[2526]: I0904 17:36:03.606443 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:36:03.627945 kubelet[2526]: I0904 17:36:03.627915 2526 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:36:03.634944 kubelet[2526]: I0904 17:36:03.634776 2526 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:36:03.634944 kubelet[2526]: I0904 17:36:03.634899 2526 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:36:03.665812 kubelet[2526]: I0904 17:36:03.665769 2526 topology_manager.go:215] "Topology Admit Handler" podUID="89b578c765991e5e85d58151c73f048e" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:36:03.665926 kubelet[2526]: I0904 17:36:03.665875 2526 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:36:03.665950 kubelet[2526]: I0904 17:36:03.665939 2526 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:36:03.670869 kubelet[2526]: E0904 17:36:03.670840 2526 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.671502 kubelet[2526]: E0904 17:36:03.671478 2526 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:36:03.826897 kubelet[2526]: I0904 17:36:03.826767 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:36:03.826897 kubelet[2526]: I0904 17:36:03.826835 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:36:03.826897 kubelet[2526]: I0904 17:36:03.826861 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.826897 kubelet[2526]: I0904 17:36:03.826889 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.827268 kubelet[2526]: I0904 17:36:03.826912 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.827268 kubelet[2526]: I0904 17:36:03.826934 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89b578c765991e5e85d58151c73f048e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89b578c765991e5e85d58151c73f048e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:36:03.827268 kubelet[2526]: I0904 17:36:03.826951 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.827268 kubelet[2526]: I0904 17:36:03.826973 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:36:03.827268 kubelet[2526]: I0904 17:36:03.826993 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:36:03.975517 kubelet[2526]: E0904 17:36:03.975098 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:03.975517 kubelet[2526]: E0904 17:36:03.975100 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:03.975517 kubelet[2526]: E0904 17:36:03.975286 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:03.991673 sudo[2541]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:04.517858 kubelet[2526]: I0904 17:36:04.517495 2526 apiserver.go:52] "Watching apiserver" Sep 4 17:36:04.529681 kubelet[2526]: I0904 17:36:04.525705 2526 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:36:04.581479 kubelet[2526]: E0904 17:36:04.581432 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:04.581632 kubelet[2526]: E0904 17:36:04.581507 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:04.587081 kubelet[2526]: E0904 17:36:04.587044 2526 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:36:04.587540 kubelet[2526]: E0904 17:36:04.587521 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:04.605171 kubelet[2526]: I0904 17:36:04.604916 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6048599430000001 podStartE2EDuration="1.604859943s" podCreationTimestamp="2024-09-04 17:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:04.604331409 +0000 UTC m=+1.154252172" watchObservedRunningTime="2024-09-04 17:36:04.604859943 +0000 UTC m=+1.154780666" Sep 4 17:36:04.612665 kubelet[2526]: I0904 17:36:04.612546 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.612441763 podStartE2EDuration="2.612441763s" podCreationTimestamp="2024-09-04 17:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:04.611824319 +0000 UTC m=+1.161745082" watchObservedRunningTime="2024-09-04 17:36:04.612441763 +0000 UTC m=+1.162362526" Sep 4 17:36:04.633086 kubelet[2526]: I0904 17:36:04.632875 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.632800915 podStartE2EDuration="2.632800915s" podCreationTimestamp="2024-09-04 17:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:04.624172991 +0000 UTC m=+1.174093754" watchObservedRunningTime="2024-09-04 17:36:04.632800915 +0000 UTC m=+1.182721718" Sep 4 
17:36:05.582131 kubelet[2526]: E0904 17:36:05.582104 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:05.883602 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:05.885054 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:05.889016 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:55068.service: Deactivated successfully. Sep 4 17:36:05.891033 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:36:05.891853 systemd[1]: session-7.scope: Consumed 8.860s CPU time, 137.2M memory peak, 0B memory swap peak. Sep 4 17:36:05.892726 systemd-logind[1413]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:36:05.894164 systemd-logind[1413]: Removed session 7. Sep 4 17:36:08.098404 kubelet[2526]: E0904 17:36:08.098364 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:08.586664 kubelet[2526]: E0904 17:36:08.586626 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:12.238020 kubelet[2526]: E0904 17:36:12.237960 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:12.594535 kubelet[2526]: E0904 17:36:12.594270 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:13.779398 kubelet[2526]: E0904 17:36:13.779362 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:14.122407 update_engine[1420]: I0904 17:36:14.121845 1420 update_attempter.cc:509] Updating boot flags... Sep 4 17:36:14.141927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2615) Sep 4 17:36:14.174836 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2619) Sep 4 17:36:14.598397 kubelet[2526]: E0904 17:36:14.598014 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:16.952945 kubelet[2526]: I0904 17:36:16.952897 2526 topology_manager.go:215] "Topology Admit Handler" podUID="37ddff9a-9494-4100-b55d-db9bb70394b0" podNamespace="kube-system" podName="kube-proxy-8m2jr" Sep 4 17:36:16.962188 kubelet[2526]: I0904 17:36:16.960815 2526 topology_manager.go:215] "Topology Admit Handler" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" podNamespace="kube-system" podName="cilium-lngm5" Sep 4 17:36:16.965563 systemd[1]: Created slice kubepods-besteffort-pod37ddff9a_9494_4100_b55d_db9bb70394b0.slice - libcontainer container kubepods-besteffort-pod37ddff9a_9494_4100_b55d_db9bb70394b0.slice. 
Sep 4 17:36:16.972378 kubelet[2526]: I0904 17:36:16.972320 2526 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:36:16.973715 containerd[1433]: time="2024-09-04T17:36:16.973679535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:36:16.974582 kubelet[2526]: I0904 17:36:16.974330 2526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:36:16.991097 systemd[1]: Created slice kubepods-burstable-podb4b7c297_d666_405a_b774_78173dbe9e3b.slice - libcontainer container kubepods-burstable-podb4b7c297_d666_405a_b774_78173dbe9e3b.slice. Sep 4 17:36:17.002822 kubelet[2526]: I0904 17:36:17.001398 2526 topology_manager.go:215] "Topology Admit Handler" podUID="0743c9b6-754f-49a3-905c-9101a0da0546" podNamespace="kube-system" podName="cilium-operator-5cc964979-q2tbl" Sep 4 17:36:17.014018 systemd[1]: Created slice kubepods-besteffort-pod0743c9b6_754f_49a3_905c_9101a0da0546.slice - libcontainer container kubepods-besteffort-pod0743c9b6_754f_49a3_905c_9101a0da0546.slice. Sep 4 17:36:17.016662 kubelet[2526]: I0904 17:36:17.016616 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tcn\" (UniqueName: \"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-kube-api-access-w5tcn\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.016662 kubelet[2526]: I0904 17:36:17.016660 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhj8h\" (UniqueName: \"kubernetes.io/projected/37ddff9a-9494-4100-b55d-db9bb70394b0-kube-api-access-bhj8h\") pod \"kube-proxy-8m2jr\" (UID: \"37ddff9a-9494-4100-b55d-db9bb70394b0\") " pod="kube-system/kube-proxy-8m2jr" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016681 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-hubble-tls\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016700 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37ddff9a-9494-4100-b55d-db9bb70394b0-xtables-lock\") pod \"kube-proxy-8m2jr\" (UID: \"37ddff9a-9494-4100-b55d-db9bb70394b0\") " pod="kube-system/kube-proxy-8m2jr" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016720 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-xtables-lock\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016740 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-etc-cni-netd\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016759 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-net\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.016781 kubelet[2526]: I0904 17:36:17.016776 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37ddff9a-9494-4100-b55d-db9bb70394b0-lib-modules\") pod \"kube-proxy-8m2jr\" (UID: \"37ddff9a-9494-4100-b55d-db9bb70394b0\") " pod="kube-system/kube-proxy-8m2jr" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016806 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-hostproc\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016827 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-cgroup\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016847 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-run\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016864 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cni-path\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016890 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-lib-modules\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017029 kubelet[2526]: I0904 17:36:17.016910 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37ddff9a-9494-4100-b55d-db9bb70394b0-kube-proxy\") pod \"kube-proxy-8m2jr\" (UID: \"37ddff9a-9494-4100-b55d-db9bb70394b0\") " pod="kube-system/kube-proxy-8m2jr" Sep 4 17:36:17.017201 kubelet[2526]: I0904 17:36:17.016927 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-bpf-maps\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017201 kubelet[2526]: I0904 17:36:17.016948 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4b7c297-d666-405a-b774-78173dbe9e3b-clustermesh-secrets\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " 
pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017201 kubelet[2526]: I0904 17:36:17.016966 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-config-path\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.017201 kubelet[2526]: I0904 17:36:17.016988 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-kernel\") pod \"cilium-lngm5\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " pod="kube-system/cilium-lngm5" Sep 4 17:36:17.118096 kubelet[2526]: I0904 17:36:17.118046 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjmc\" (UniqueName: \"kubernetes.io/projected/0743c9b6-754f-49a3-905c-9101a0da0546-kube-api-access-bhjmc\") pod \"cilium-operator-5cc964979-q2tbl\" (UID: \"0743c9b6-754f-49a3-905c-9101a0da0546\") " pod="kube-system/cilium-operator-5cc964979-q2tbl" Sep 4 17:36:17.119574 kubelet[2526]: I0904 17:36:17.118257 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0743c9b6-754f-49a3-905c-9101a0da0546-cilium-config-path\") pod \"cilium-operator-5cc964979-q2tbl\" (UID: \"0743c9b6-754f-49a3-905c-9101a0da0546\") " pod="kube-system/cilium-operator-5cc964979-q2tbl" Sep 4 17:36:17.283882 kubelet[2526]: E0904 17:36:17.282913 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.284030 containerd[1433]: time="2024-09-04T17:36:17.283403086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8m2jr,Uid:37ddff9a-9494-4100-b55d-db9bb70394b0,Namespace:kube-system,Attempt:0,}" Sep 4 17:36:17.294402 kubelet[2526]: E0904 17:36:17.294069 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.294949 containerd[1433]: time="2024-09-04T17:36:17.294886230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lngm5,Uid:b4b7c297-d666-405a-b774-78173dbe9e3b,Namespace:kube-system,Attempt:0,}" Sep 4 17:36:17.310814 containerd[1433]: time="2024-09-04T17:36:17.310673848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:17.310814 containerd[1433]: time="2024-09-04T17:36:17.310738458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.311088 containerd[1433]: time="2024-09-04T17:36:17.310896044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:17.311088 containerd[1433]: time="2024-09-04T17:36:17.310937611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.318269 kubelet[2526]: E0904 17:36:17.316749 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.319019 containerd[1433]: time="2024-09-04T17:36:17.318939418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q2tbl,Uid:0743c9b6-754f-49a3-905c-9101a0da0546,Namespace:kube-system,Attempt:0,}" Sep 4 17:36:17.331188 containerd[1433]: time="2024-09-04T17:36:17.330883519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:17.331188 containerd[1433]: time="2024-09-04T17:36:17.330951370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.331188 containerd[1433]: time="2024-09-04T17:36:17.330965572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:17.331188 containerd[1433]: time="2024-09-04T17:36:17.330975654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.333029 systemd[1]: Started cri-containerd-51073d5414d11328f89c92821a4e898845a8b644b4291387ccc0e72c5118c373.scope - libcontainer container 51073d5414d11328f89c92821a4e898845a8b644b4291387ccc0e72c5118c373. Sep 4 17:36:17.354003 systemd[1]: Started cri-containerd-30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56.scope - libcontainer container 30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56. Sep 4 17:36:17.359180 containerd[1433]: time="2024-09-04T17:36:17.359073153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:17.359180 containerd[1433]: time="2024-09-04T17:36:17.359126602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.359180 containerd[1433]: time="2024-09-04T17:36:17.359141484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:17.359180 containerd[1433]: time="2024-09-04T17:36:17.359151806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:17.378124 systemd[1]: Started cri-containerd-76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3.scope - libcontainer container 76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3. 
Sep 4 17:36:17.379659 containerd[1433]: time="2024-09-04T17:36:17.379484297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8m2jr,Uid:37ddff9a-9494-4100-b55d-db9bb70394b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"51073d5414d11328f89c92821a4e898845a8b644b4291387ccc0e72c5118c373\"" Sep 4 17:36:17.380203 kubelet[2526]: E0904 17:36:17.380143 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.384704 containerd[1433]: time="2024-09-04T17:36:17.384407754Z" level=info msg="CreateContainer within sandbox \"51073d5414d11328f89c92821a4e898845a8b644b4291387ccc0e72c5118c373\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:36:17.394251 containerd[1433]: time="2024-09-04T17:36:17.394212500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lngm5,Uid:b4b7c297-d666-405a-b774-78173dbe9e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\"" Sep 4 17:36:17.395744 kubelet[2526]: E0904 17:36:17.395707 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.397871 containerd[1433]: time="2024-09-04T17:36:17.397828139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:36:17.406058 containerd[1433]: time="2024-09-04T17:36:17.406009496Z" level=info msg="CreateContainer within sandbox \"51073d5414d11328f89c92821a4e898845a8b644b4291387ccc0e72c5118c373\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2eb4d6847765404f16f6612cca415230420599f57822030c55807f78908b05f5\"" Sep 4 17:36:17.406877 containerd[1433]: time="2024-09-04T17:36:17.406843754Z" level=info msg="StartContainer for \"2eb4d6847765404f16f6612cca415230420599f57822030c55807f78908b05f5\"" Sep 4 17:36:17.431566 containerd[1433]: time="2024-09-04T17:36:17.431493321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-q2tbl,Uid:0743c9b6-754f-49a3-905c-9101a0da0546,Namespace:kube-system,Attempt:0,} returns sandbox id \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\"" Sep 4 17:36:17.432605 kubelet[2526]: E0904 17:36:17.432399 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.443999 systemd[1]: Started cri-containerd-2eb4d6847765404f16f6612cca415230420599f57822030c55807f78908b05f5.scope - libcontainer container 2eb4d6847765404f16f6612cca415230420599f57822030c55807f78908b05f5. 
Sep 4 17:36:17.473455 containerd[1433]: time="2024-09-04T17:36:17.473341421Z" level=info msg="StartContainer for \"2eb4d6847765404f16f6612cca415230420599f57822030c55807f78908b05f5\" returns successfully" Sep 4 17:36:17.604895 kubelet[2526]: E0904 17:36:17.604844 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:17.615264 kubelet[2526]: I0904 17:36:17.615221 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8m2jr" podStartSLOduration=1.6151817400000001 podStartE2EDuration="1.61518174s" podCreationTimestamp="2024-09-04 17:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:17.614868928 +0000 UTC m=+14.164789771" watchObservedRunningTime="2024-09-04 17:36:17.61518174 +0000 UTC m=+14.165102503" Sep 4 17:36:25.197155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750559719.mount: Deactivated successfully. Sep 4 17:36:26.566293 containerd[1433]: time="2024-09-04T17:36:26.566239006Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:26.566932 containerd[1433]: time="2024-09-04T17:36:26.566886678Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651558" Sep 4 17:36:26.567724 containerd[1433]: time="2024-09-04T17:36:26.567692447Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:26.569324 containerd[1433]: time="2024-09-04T17:36:26.569287224Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.171410878s" Sep 4 17:36:26.569374 containerd[1433]: time="2024-09-04T17:36:26.569328149Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 17:36:26.572249 containerd[1433]: time="2024-09-04T17:36:26.572223349Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:36:26.574042 containerd[1433]: time="2024-09-04T17:36:26.573757680Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:36:26.599137 containerd[1433]: time="2024-09-04T17:36:26.599089008Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\"" Sep 4 17:36:26.599706 containerd[1433]: 
time="2024-09-04T17:36:26.599674153Z" level=info msg="StartContainer for \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\"" Sep 4 17:36:26.639003 systemd[1]: Started cri-containerd-01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4.scope - libcontainer container 01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4. Sep 4 17:36:26.668885 containerd[1433]: time="2024-09-04T17:36:26.668733648Z" level=info msg="StartContainer for \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\" returns successfully" Sep 4 17:36:26.766690 systemd[1]: cri-containerd-01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4.scope: Deactivated successfully. Sep 4 17:36:26.865327 containerd[1433]: time="2024-09-04T17:36:26.865264955Z" level=info msg="shim disconnected" id=01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4 namespace=k8s.io Sep 4 17:36:26.865327 containerd[1433]: time="2024-09-04T17:36:26.865317601Z" level=warning msg="cleaning up after shim disconnected" id=01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4 namespace=k8s.io Sep 4 17:36:26.865327 containerd[1433]: time="2024-09-04T17:36:26.865326002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:36:27.596343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4-rootfs.mount: Deactivated successfully. Sep 4 17:36:27.647218 kubelet[2526]: E0904 17:36:27.647171 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:27.649120 containerd[1433]: time="2024-09-04T17:36:27.649087739Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:36:27.677952 containerd[1433]: time="2024-09-04T17:36:27.677895527Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\"" Sep 4 17:36:27.678496 containerd[1433]: time="2024-09-04T17:36:27.678447226Z" level=info msg="StartContainer for \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\"" Sep 4 17:36:27.707007 systemd[1]: Started cri-containerd-ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850.scope - libcontainer container ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850. Sep 4 17:36:27.731711 containerd[1433]: time="2024-09-04T17:36:27.731659773Z" level=info msg="StartContainer for \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\" returns successfully" Sep 4 17:36:27.758070 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:36:27.758734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:36:27.758827 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:36:27.764111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:36:27.767608 systemd[1]: cri-containerd-ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850.scope: Deactivated successfully. Sep 4 17:36:27.799866 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:36:27.815830 containerd[1433]: time="2024-09-04T17:36:27.815642797Z" level=info msg="shim disconnected" id=ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850 namespace=k8s.io Sep 4 17:36:27.815830 containerd[1433]: time="2024-09-04T17:36:27.815699883Z" level=warning msg="cleaning up after shim disconnected" id=ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850 namespace=k8s.io Sep 4 17:36:27.815830 containerd[1433]: time="2024-09-04T17:36:27.815709564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:36:28.090861 containerd[1433]: time="2024-09-04T17:36:28.090786930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:28.091632 containerd[1433]: time="2024-09-04T17:36:28.091597533Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138330" Sep 4 17:36:28.092380 containerd[1433]: time="2024-09-04T17:36:28.092338929Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:28.094460 containerd[1433]: time="2024-09-04T17:36:28.094418822Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.522157188s" Sep 4 17:36:28.094521 containerd[1433]: time="2024-09-04T17:36:28.094460306Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 17:36:28.097779 containerd[1433]: time="2024-09-04T17:36:28.097745843Z" level=info msg="CreateContainer within sandbox \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:36:28.106259 containerd[1433]: time="2024-09-04T17:36:28.106209310Z" level=info msg="CreateContainer within sandbox \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\"" Sep 4 17:36:28.106748 containerd[1433]: time="2024-09-04T17:36:28.106722322Z" level=info msg="StartContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\"" Sep 4 17:36:28.132991 systemd[1]: Started cri-containerd-e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0.scope - libcontainer container e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0. Sep 4 17:36:28.156210 containerd[1433]: time="2024-09-04T17:36:28.156159145Z" level=info msg="StartContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" returns successfully" Sep 4 17:36:28.600047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850-rootfs.mount: Deactivated successfully. 
Sep 4 17:36:28.653282 kubelet[2526]: E0904 17:36:28.653245 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:28.657456 kubelet[2526]: E0904 17:36:28.657387 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:28.660828 containerd[1433]: time="2024-09-04T17:36:28.659327632Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:36:28.666423 kubelet[2526]: I0904 17:36:28.666379 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-q2tbl" podStartSLOduration=2.005936762 podStartE2EDuration="12.66633895s" podCreationTimestamp="2024-09-04 17:36:16 +0000 UTC" firstStartedPulling="2024-09-04 17:36:17.434238897 +0000 UTC m=+13.984159660" lastFinishedPulling="2024-09-04 17:36:28.094641085 +0000 UTC m=+24.644561848" observedRunningTime="2024-09-04 17:36:28.666025558 +0000 UTC m=+25.215946321" watchObservedRunningTime="2024-09-04 17:36:28.66633895 +0000 UTC m=+25.216259713" Sep 4 17:36:28.697182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83793602.mount: Deactivated successfully. Sep 4 17:36:28.702884 containerd[1433]: time="2024-09-04T17:36:28.702827127Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\"" Sep 4 17:36:28.703522 containerd[1433]: time="2024-09-04T17:36:28.703490995Z" level=info msg="StartContainer for \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\"" Sep 4 17:36:28.740027 systemd[1]: Started cri-containerd-903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20.scope - libcontainer container 903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20. Sep 4 17:36:28.786757 containerd[1433]: time="2024-09-04T17:36:28.786705957Z" level=info msg="StartContainer for \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\" returns successfully" Sep 4 17:36:28.830484 systemd[1]: cri-containerd-903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20.scope: Deactivated successfully. Sep 4 17:36:28.921471 containerd[1433]: time="2024-09-04T17:36:28.920971426Z" level=info msg="shim disconnected" id=903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20 namespace=k8s.io Sep 4 17:36:28.921471 containerd[1433]: time="2024-09-04T17:36:28.921054195Z" level=warning msg="cleaning up after shim disconnected" id=903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20 namespace=k8s.io Sep 4 17:36:28.921471 containerd[1433]: time="2024-09-04T17:36:28.921064036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:36:29.599180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20-rootfs.mount: Deactivated successfully. 
Sep 4 17:36:29.662523 kubelet[2526]: E0904 17:36:29.661808 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:29.666145 kubelet[2526]: E0904 17:36:29.666116 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:29.668738 containerd[1433]: time="2024-09-04T17:36:29.668700281Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:36:29.684618 containerd[1433]: time="2024-09-04T17:36:29.684572326Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\"" Sep 4 17:36:29.685113 containerd[1433]: time="2024-09-04T17:36:29.685036211Z" level=info msg="StartContainer for \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\"" Sep 4 17:36:29.732046 systemd[1]: Started cri-containerd-a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4.scope - libcontainer container a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4. Sep 4 17:36:29.763871 systemd[1]: cri-containerd-a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4.scope: Deactivated successfully. Sep 4 17:36:29.766270 containerd[1433]: time="2024-09-04T17:36:29.766231895Z" level=info msg="StartContainer for \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\" returns successfully" Sep 4 17:36:29.797382 containerd[1433]: time="2024-09-04T17:36:29.797311639Z" level=info msg="shim disconnected" id=a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4 namespace=k8s.io Sep 4 17:36:29.797382 containerd[1433]: time="2024-09-04T17:36:29.797372605Z" level=warning msg="cleaning up after shim disconnected" id=a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4 namespace=k8s.io Sep 4 17:36:29.797382 containerd[1433]: time="2024-09-04T17:36:29.797382606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:36:30.599299 systemd[1]: run-containerd-runc-k8s.io-a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4-runc.woEXuF.mount: Deactivated successfully. Sep 4 17:36:30.599393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4-rootfs.mount: Deactivated successfully. Sep 4 17:36:30.665825 kubelet[2526]: E0904 17:36:30.664986 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:30.667741 containerd[1433]: time="2024-09-04T17:36:30.667707160Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:36:30.689108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937075571.mount: Deactivated successfully. 
Sep 4 17:36:30.692534 containerd[1433]: time="2024-09-04T17:36:30.692474993Z" level=info msg="CreateContainer within sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\"" Sep 4 17:36:30.698951 containerd[1433]: time="2024-09-04T17:36:30.698898763Z" level=info msg="StartContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\"" Sep 4 17:36:30.730038 systemd[1]: Started cri-containerd-c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584.scope - libcontainer container c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584. Sep 4 17:36:30.760654 containerd[1433]: time="2024-09-04T17:36:30.760601303Z" level=info msg="StartContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" returns successfully" Sep 4 17:36:30.863312 kubelet[2526]: I0904 17:36:30.862909 2526 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:36:30.888074 kubelet[2526]: I0904 17:36:30.888034 2526 topology_manager.go:215] "Topology Admit Handler" podUID="52727563-c31a-4bb6-afcb-d2db1facb474" podNamespace="kube-system" podName="coredns-76f75df574-6z8jh" Sep 4 17:36:30.888263 kubelet[2526]: I0904 17:36:30.888247 2526 topology_manager.go:215] "Topology Admit Handler" podUID="a880f862-5c06-4d0e-ba9f-7a6984005b92" podNamespace="kube-system" podName="coredns-76f75df574-vc2hk" Sep 4 17:36:30.895758 systemd[1]: Created slice kubepods-burstable-pod52727563_c31a_4bb6_afcb_d2db1facb474.slice - libcontainer container kubepods-burstable-pod52727563_c31a_4bb6_afcb_d2db1facb474.slice. Sep 4 17:36:30.908893 systemd[1]: Created slice kubepods-burstable-poda880f862_5c06_4d0e_ba9f_7a6984005b92.slice - libcontainer container kubepods-burstable-poda880f862_5c06_4d0e_ba9f_7a6984005b92.slice. 
Sep 4 17:36:30.928981 kubelet[2526]: I0904 17:36:30.928934 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a880f862-5c06-4d0e-ba9f-7a6984005b92-config-volume\") pod \"coredns-76f75df574-vc2hk\" (UID: \"a880f862-5c06-4d0e-ba9f-7a6984005b92\") " pod="kube-system/coredns-76f75df574-vc2hk" Sep 4 17:36:30.928981 kubelet[2526]: I0904 17:36:30.928989 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52727563-c31a-4bb6-afcb-d2db1facb474-config-volume\") pod \"coredns-76f75df574-6z8jh\" (UID: \"52727563-c31a-4bb6-afcb-d2db1facb474\") " pod="kube-system/coredns-76f75df574-6z8jh" Sep 4 17:36:30.929251 kubelet[2526]: I0904 17:36:30.929015 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgcbm\" (UniqueName: \"kubernetes.io/projected/52727563-c31a-4bb6-afcb-d2db1facb474-kube-api-access-mgcbm\") pod \"coredns-76f75df574-6z8jh\" (UID: \"52727563-c31a-4bb6-afcb-d2db1facb474\") " pod="kube-system/coredns-76f75df574-6z8jh" Sep 4 17:36:30.929251 kubelet[2526]: I0904 17:36:30.929079 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wrh\" (UniqueName: \"kubernetes.io/projected/a880f862-5c06-4d0e-ba9f-7a6984005b92-kube-api-access-68wrh\") pod \"coredns-76f75df574-vc2hk\" (UID: \"a880f862-5c06-4d0e-ba9f-7a6984005b92\") " pod="kube-system/coredns-76f75df574-vc2hk" Sep 4 17:36:31.199754 kubelet[2526]: E0904 17:36:31.199183 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:31.201147 containerd[1433]: time="2024-09-04T17:36:31.201096546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6z8jh,Uid:52727563-c31a-4bb6-afcb-d2db1facb474,Namespace:kube-system,Attempt:0,}" Sep 4 17:36:31.211306 kubelet[2526]: E0904 17:36:31.211269 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:31.211755 containerd[1433]: time="2024-09-04T17:36:31.211718279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vc2hk,Uid:a880f862-5c06-4d0e-ba9f-7a6984005b92,Namespace:kube-system,Attempt:0,}" Sep 4 17:36:31.669703 kubelet[2526]: E0904 17:36:31.669673 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:31.688833 kubelet[2526]: I0904 17:36:31.686669 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lngm5" podStartSLOduration=6.512264318 podStartE2EDuration="15.686625345s" podCreationTimestamp="2024-09-04 17:36:16 +0000 UTC" firstStartedPulling="2024-09-04 17:36:17.396728197 +0000 UTC m=+13.946648920" lastFinishedPulling="2024-09-04 17:36:26.571089144 +0000 UTC m=+23.121009947" observedRunningTime="2024-09-04 17:36:31.686090576 +0000 UTC m=+28.236011339" watchObservedRunningTime="2024-09-04 17:36:31.686625345 +0000 UTC m=+28.236546108" Sep 4 17:36:32.671219 kubelet[2526]: E0904 17:36:32.671183 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:32.995396 systemd-networkd[1373]: cilium_host: Link UP Sep 4 17:36:32.996330 systemd-networkd[1373]: cilium_net: Link UP Sep 4 17:36:33.005596 systemd-networkd[1373]: cilium_net: Gained carrier Sep 4 17:36:33.005734 systemd-networkd[1373]: cilium_host: Gained carrier Sep 4 17:36:33.005851 systemd-networkd[1373]: cilium_net: Gained IPv6LL Sep 4 17:36:33.005969 systemd-networkd[1373]: cilium_host: Gained IPv6LL Sep 4 17:36:33.093501 systemd-networkd[1373]: cilium_vxlan: Link UP Sep 4 17:36:33.093509 systemd-networkd[1373]: cilium_vxlan: Gained carrier Sep 4 17:36:33.421827 kernel: NET: Registered PF_ALG protocol family Sep 4 17:36:33.674621 kubelet[2526]: E0904 17:36:33.673048 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:34.056728 systemd-networkd[1373]: lxc_health: Link UP Sep 4 17:36:34.063552 systemd-networkd[1373]: lxc_health: Gained carrier Sep 4 17:36:34.311928 systemd-networkd[1373]: lxc2d971eb45de6: Link UP Sep 4 17:36:34.317825 kernel: eth0: renamed from tmp8b21a Sep 4 17:36:34.326331 systemd-networkd[1373]: lxc2d971eb45de6: Gained carrier Sep 4 17:36:34.338733 kernel: eth0: renamed from tmpece83 Sep 4 17:36:34.346044 systemd-networkd[1373]: lxc273f8904e88d: Link UP Sep 4 17:36:34.347928 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Sep 4 17:36:34.348431 systemd-networkd[1373]: lxc273f8904e88d: Gained carrier Sep 4 17:36:35.234948 systemd-networkd[1373]: lxc_health: Gained IPv6LL Sep 4 17:36:35.299939 kubelet[2526]: E0904 17:36:35.299513 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:35.745921 systemd-networkd[1373]: lxc273f8904e88d: Gained IPv6LL Sep 4 17:36:36.065943 systemd-networkd[1373]: lxc2d971eb45de6: Gained IPv6LL Sep 4 17:36:37.892756 containerd[1433]: time="2024-09-04T17:36:37.892655637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:37.892756 containerd[1433]: time="2024-09-04T17:36:37.892778846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:37.893387 containerd[1433]: time="2024-09-04T17:36:37.892832890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:37.893387 containerd[1433]: time="2024-09-04T17:36:37.892863733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:37.893974 containerd[1433]: time="2024-09-04T17:36:37.893891850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:37.894477 containerd[1433]: time="2024-09-04T17:36:37.893948614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:37.894647 containerd[1433]: time="2024-09-04T17:36:37.894458573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:37.894647 containerd[1433]: time="2024-09-04T17:36:37.894610264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:37.927039 systemd[1]: Started cri-containerd-8b21a593a26785aa38d3339f58bfd2a7538bd89eb433225795696af7492bac30.scope - libcontainer container 8b21a593a26785aa38d3339f58bfd2a7538bd89eb433225795696af7492bac30. Sep 4 17:36:37.929064 systemd[1]: Started cri-containerd-ece832c1c6fbfaeb546444313dfca5667a0ac34e957b1b0d215a7d76cf6e8566.scope - libcontainer container ece832c1c6fbfaeb546444313dfca5667a0ac34e957b1b0d215a7d76cf6e8566. Sep 4 17:36:37.937980 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:36:37.948136 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:36:37.960622 containerd[1433]: time="2024-09-04T17:36:37.960568796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6z8jh,Uid:52727563-c31a-4bb6-afcb-d2db1facb474,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b21a593a26785aa38d3339f58bfd2a7538bd89eb433225795696af7492bac30\"" Sep 4 17:36:37.961759 kubelet[2526]: E0904 17:36:37.961660 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:37.965950 containerd[1433]: time="2024-09-04T17:36:37.965888277Z" level=info msg="CreateContainer within sandbox \"8b21a593a26785aa38d3339f58bfd2a7538bd89eb433225795696af7492bac30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:36:37.977187 containerd[1433]: time="2024-09-04T17:36:37.977117004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vc2hk,Uid:a880f862-5c06-4d0e-ba9f-7a6984005b92,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece832c1c6fbfaeb546444313dfca5667a0ac34e957b1b0d215a7d76cf6e8566\"" Sep 4 17:36:37.978018 kubelet[2526]: E0904 17:36:37.977984 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:37.982014 containerd[1433]: time="2024-09-04T17:36:37.981970690Z" level=info msg="CreateContainer within sandbox \"ece832c1c6fbfaeb546444313dfca5667a0ac34e957b1b0d215a7d76cf6e8566\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:36:37.983991 containerd[1433]: time="2024-09-04T17:36:37.983949079Z" level=info msg="CreateContainer within sandbox \"8b21a593a26785aa38d3339f58bfd2a7538bd89eb433225795696af7492bac30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20768fa50cd251b7b8e6552e63d279b722b582a430c59905a3e3c24a576d1bd8\"" Sep 4 17:36:37.984942 containerd[1433]: time="2024-09-04T17:36:37.984908871Z" level=info msg="StartContainer for \"20768fa50cd251b7b8e6552e63d279b722b582a430c59905a3e3c24a576d1bd8\"" Sep 4 17:36:37.999469 containerd[1433]: time="2024-09-04T17:36:37.999415005Z" level=info msg="CreateContainer within sandbox \"ece832c1c6fbfaeb546444313dfca5667a0ac34e957b1b0d215a7d76cf6e8566\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbb7ed2a0fe09af4a32942ba75a4b0cf99e7273b4e13d68d259973c6d9e03441\"" Sep 4 17:36:38.000014 containerd[1433]: time="2024-09-04T17:36:37.999979367Z" level=info 
msg="StartContainer for \"fbb7ed2a0fe09af4a32942ba75a4b0cf99e7273b4e13d68d259973c6d9e03441\"" Sep 4 17:36:38.011984 systemd[1]: Started cri-containerd-20768fa50cd251b7b8e6552e63d279b722b582a430c59905a3e3c24a576d1bd8.scope - libcontainer container 20768fa50cd251b7b8e6552e63d279b722b582a430c59905a3e3c24a576d1bd8. Sep 4 17:36:38.031040 systemd[1]: Started cri-containerd-fbb7ed2a0fe09af4a32942ba75a4b0cf99e7273b4e13d68d259973c6d9e03441.scope - libcontainer container fbb7ed2a0fe09af4a32942ba75a4b0cf99e7273b4e13d68d259973c6d9e03441. Sep 4 17:36:38.091557 containerd[1433]: time="2024-09-04T17:36:38.091501912Z" level=info msg="StartContainer for \"fbb7ed2a0fe09af4a32942ba75a4b0cf99e7273b4e13d68d259973c6d9e03441\" returns successfully" Sep 4 17:36:38.091686 containerd[1433]: time="2024-09-04T17:36:38.091654483Z" level=info msg="StartContainer for \"20768fa50cd251b7b8e6552e63d279b722b582a430c59905a3e3c24a576d1bd8\" returns successfully" Sep 4 17:36:38.694194 kubelet[2526]: E0904 17:36:38.694152 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:38.695922 kubelet[2526]: E0904 17:36:38.695902 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:38.710510 kubelet[2526]: I0904 17:36:38.709358 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vc2hk" podStartSLOduration=21.709314998 podStartE2EDuration="21.709314998s" podCreationTimestamp="2024-09-04 17:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:38.708891167 +0000 UTC m=+35.258812010" watchObservedRunningTime="2024-09-04 17:36:38.709314998 +0000 UTC m=+35.259235721" Sep 4 17:36:38.731185 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:36018.service - OpenSSH per-connection server daemon (10.0.0.1:36018). Sep 4 17:36:38.781867 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 36018 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:38.783822 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:38.787882 systemd-logind[1413]: New session 8 of user core. Sep 4 17:36:38.795973 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:36:38.919741 sshd[3932]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:38.922537 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:36018.service: Deactivated successfully. Sep 4 17:36:38.926317 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:36:38.927925 systemd-logind[1413]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:36:38.929202 systemd-logind[1413]: Removed session 8. 
Sep 4 17:36:39.696403 kubelet[2526]: E0904 17:36:39.696340 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:39.696787 kubelet[2526]: E0904 17:36:39.696757 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:40.697612 kubelet[2526]: E0904 17:36:40.697494 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:40.697612 kubelet[2526]: E0904 17:36:40.697557 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:41.555701 kubelet[2526]: I0904 17:36:41.555389 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:36:41.556211 kubelet[2526]: E0904 17:36:41.556173 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:41.571537 kubelet[2526]: I0904 17:36:41.571497 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6z8jh" podStartSLOduration=25.571456433 podStartE2EDuration="25.571456433s" podCreationTimestamp="2024-09-04 17:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:36:38.734659694 +0000 UTC m=+35.284580457" watchObservedRunningTime="2024-09-04 17:36:41.571456433 +0000 UTC m=+38.121377196" Sep 4 17:36:41.701032 kubelet[2526]: E0904 17:36:41.700755 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:36:43.931980 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:48928.service - OpenSSH per-connection server daemon (10.0.0.1:48928). Sep 4 17:36:43.972900 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 48928 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:43.974272 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:43.980683 systemd-logind[1413]: New session 9 of user core. Sep 4 17:36:43.991010 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:36:44.110690 sshd[3954]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:44.115186 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:48928.service: Deactivated successfully. Sep 4 17:36:44.116890 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:36:44.117499 systemd-logind[1413]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:36:44.118252 systemd-logind[1413]: Removed session 9. Sep 4 17:36:49.121474 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:49050.service - OpenSSH per-connection server daemon (10.0.0.1:49050). 
Sep 4 17:36:49.160476 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 49050 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:49.161829 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:49.165603 systemd-logind[1413]: New session 10 of user core. Sep 4 17:36:49.175971 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:36:49.284605 sshd[3971]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:49.289259 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:49050.service: Deactivated successfully. Sep 4 17:36:49.290912 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:36:49.292192 systemd-logind[1413]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:36:49.298543 systemd-logind[1413]: Removed session 10. Sep 4 17:36:54.298938 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:56806.service - OpenSSH per-connection server daemon (10.0.0.1:56806). Sep 4 17:36:54.344313 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 56806 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:54.345632 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:54.350606 systemd-logind[1413]: New session 11 of user core. Sep 4 17:36:54.362018 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:36:54.484137 sshd[3987]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:54.494976 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:56806.service: Deactivated successfully. Sep 4 17:36:54.496956 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:36:54.498530 systemd-logind[1413]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:36:54.509803 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:56822.service - OpenSSH per-connection server daemon (10.0.0.1:56822). Sep 4 17:36:54.513117 systemd-logind[1413]: Removed session 11. Sep 4 17:36:54.546173 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 56822 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:54.547592 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:54.551875 systemd-logind[1413]: New session 12 of user core. Sep 4 17:36:54.563016 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:36:54.733995 sshd[4002]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:54.744296 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:56822.service: Deactivated successfully. Sep 4 17:36:54.751295 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:36:54.753586 systemd-logind[1413]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:36:54.765665 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:56830.service - OpenSSH per-connection server daemon (10.0.0.1:56830). Sep 4 17:36:54.766735 systemd-logind[1413]: Removed session 12. Sep 4 17:36:54.805442 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 56830 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:54.806843 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:54.811413 systemd-logind[1413]: New session 13 of user core. Sep 4 17:36:54.821025 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 17:36:54.937923 sshd[4015]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:54.941623 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:56830.service: Deactivated successfully. Sep 4 17:36:54.943627 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:36:54.945441 systemd-logind[1413]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:36:54.947056 systemd-logind[1413]: Removed session 13. Sep 4 17:36:59.953907 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:56842.service - OpenSSH per-connection server daemon (10.0.0.1:56842). Sep 4 17:36:59.994403 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 56842 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:36:59.996136 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:00.002748 systemd-logind[1413]: New session 14 of user core. Sep 4 17:37:00.015505 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:37:00.140524 sshd[4029]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:00.144367 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:56842.service: Deactivated successfully. Sep 4 17:37:00.147407 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:37:00.148236 systemd-logind[1413]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:37:00.149673 systemd-logind[1413]: Removed session 14. Sep 4 17:37:05.151899 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:37822.service - OpenSSH per-connection server daemon (10.0.0.1:37822). Sep 4 17:37:05.192187 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 37822 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:05.193656 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:05.198051 systemd-logind[1413]: New session 15 of user core. Sep 4 17:37:05.209997 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:37:05.323688 sshd[4045]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:05.340636 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:37822.service: Deactivated successfully. Sep 4 17:37:05.342585 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:37:05.345809 systemd-logind[1413]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:37:05.347297 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:37838.service - OpenSSH per-connection server daemon (10.0.0.1:37838). Sep 4 17:37:05.348200 systemd-logind[1413]: Removed session 15. Sep 4 17:37:05.397773 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 37838 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:05.399196 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:05.403525 systemd-logind[1413]: New session 16 of user core. Sep 4 17:37:05.417998 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:37:05.731835 sshd[4059]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:05.745569 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:37838.service: Deactivated successfully. Sep 4 17:37:05.747930 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:37:05.752103 systemd-logind[1413]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:37:05.758219 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850). Sep 4 17:37:05.761242 systemd-logind[1413]: Removed session 16. 
Sep 4 17:37:05.797567 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:05.799498 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:05.803769 systemd-logind[1413]: New session 17 of user core. Sep 4 17:37:05.812979 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:37:07.086415 sshd[4071]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:07.097912 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:37850.service: Deactivated successfully. Sep 4 17:37:07.104387 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:37:07.106505 systemd-logind[1413]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:37:07.115175 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:37854.service - OpenSSH per-connection server daemon (10.0.0.1:37854). Sep 4 17:37:07.116706 systemd-logind[1413]: Removed session 17. Sep 4 17:37:07.158657 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 37854 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:07.160296 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:07.164498 systemd-logind[1413]: New session 18 of user core. Sep 4 17:37:07.170971 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:37:07.403737 sshd[4091]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:07.411937 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:37854.service: Deactivated successfully. Sep 4 17:37:07.413514 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:37:07.415386 systemd-logind[1413]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:37:07.424133 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:37864.service - OpenSSH per-connection server daemon (10.0.0.1:37864). Sep 4 17:37:07.425318 systemd-logind[1413]: Removed session 18. Sep 4 17:37:07.463014 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 37864 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:07.464914 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:07.473475 systemd-logind[1413]: New session 19 of user core. Sep 4 17:37:07.483144 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:37:07.611385 sshd[4103]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:07.616100 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:37864.service: Deactivated successfully. Sep 4 17:37:07.617999 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:37:07.618641 systemd-logind[1413]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:37:07.619634 systemd-logind[1413]: Removed session 19. Sep 4 17:37:12.624310 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:58810.service - OpenSSH per-connection server daemon (10.0.0.1:58810). Sep 4 17:37:12.665908 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 58810 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:12.667335 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:12.671311 systemd-logind[1413]: New session 20 of user core. Sep 4 17:37:12.679970 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 17:37:12.799608 sshd[4120]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:12.803523 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:58810.service: Deactivated successfully. Sep 4 17:37:12.805542 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:37:12.806498 systemd-logind[1413]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:37:12.807982 systemd-logind[1413]: Removed session 20. Sep 4 17:37:17.566680 kubelet[2526]: E0904 17:37:17.566630 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:17.815189 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:58820.service - OpenSSH per-connection server daemon (10.0.0.1:58820). Sep 4 17:37:17.859815 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 58820 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:17.861144 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:17.865458 systemd-logind[1413]: New session 21 of user core. Sep 4 17:37:17.876967 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:37:17.985827 sshd[4136]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:17.988346 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:58820.service: Deactivated successfully. Sep 4 17:37:17.989999 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:37:17.992029 systemd-logind[1413]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:37:17.992890 systemd-logind[1413]: Removed session 21. Sep 4 17:37:22.999123 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:49522.service - OpenSSH per-connection server daemon (10.0.0.1:49522). Sep 4 17:37:23.041424 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 49522 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:23.043064 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:23.048186 systemd-logind[1413]: New session 22 of user core. Sep 4 17:37:23.057184 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:37:23.168690 sshd[4150]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:23.171555 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:49522.service: Deactivated successfully. Sep 4 17:37:23.173586 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:37:23.178763 systemd-logind[1413]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:37:23.180205 systemd-logind[1413]: Removed session 22. Sep 4 17:37:23.568557 kubelet[2526]: E0904 17:37:23.568097 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:28.180161 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528). Sep 4 17:37:28.221620 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:28.222200 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:28.226823 systemd-logind[1413]: New session 23 of user core. Sep 4 17:37:28.235977 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:37:28.344538 sshd[4164]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:28.352332 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:49528.service: Deactivated successfully. Sep 4 17:37:28.355892 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:37:28.357183 systemd-logind[1413]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:37:28.370214 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:49530.service - OpenSSH per-connection server daemon (10.0.0.1:49530). Sep 4 17:37:28.371093 systemd-logind[1413]: Removed session 23. Sep 4 17:37:28.405345 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 49530 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:28.406560 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:28.410331 systemd-logind[1413]: New session 24 of user core. Sep 4 17:37:28.421037 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:37:30.438108 containerd[1433]: time="2024-09-04T17:37:30.438054328Z" level=info msg="StopContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" with timeout 30 (s)" Sep 4 17:37:30.443978 containerd[1433]: time="2024-09-04T17:37:30.443910706Z" level=info msg="Stop container \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" with signal terminated" Sep 4 17:37:30.458308 systemd[1]: cri-containerd-e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0.scope: Deactivated successfully. Sep 4 17:37:30.465788 containerd[1433]: time="2024-09-04T17:37:30.465723761Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:37:30.476707 containerd[1433]: time="2024-09-04T17:37:30.476662189Z" level=info msg="StopContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" with timeout 2 (s)" Sep 4 17:37:30.477150 containerd[1433]: time="2024-09-04T17:37:30.477029953Z" level=info msg="Stop container \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" with signal terminated" Sep 4 17:37:30.487160 systemd-networkd[1373]: lxc_health: Link DOWN Sep 4 17:37:30.487167 systemd-networkd[1373]: lxc_health: Lost carrier Sep 4 17:37:30.491652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0-rootfs.mount: Deactivated successfully. Sep 4 17:37:30.502383 containerd[1433]: time="2024-09-04T17:37:30.502318122Z" level=info msg="shim disconnected" id=e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0 namespace=k8s.io Sep 4 17:37:30.502383 containerd[1433]: time="2024-09-04T17:37:30.502376442Z" level=warning msg="cleaning up after shim disconnected" id=e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0 namespace=k8s.io Sep 4 17:37:30.502383 containerd[1433]: time="2024-09-04T17:37:30.502384803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:30.507160 systemd[1]: cri-containerd-c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584.scope: Deactivated successfully. Sep 4 17:37:30.507689 systemd[1]: cri-containerd-c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584.scope: Consumed 6.703s CPU time. 
Sep 4 17:37:30.526327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584-rootfs.mount: Deactivated successfully. Sep 4 17:37:30.531996 containerd[1433]: time="2024-09-04T17:37:30.531929654Z" level=info msg="shim disconnected" id=c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584 namespace=k8s.io Sep 4 17:37:30.531996 containerd[1433]: time="2024-09-04T17:37:30.531992134Z" level=warning msg="cleaning up after shim disconnected" id=c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584 namespace=k8s.io Sep 4 17:37:30.531996 containerd[1433]: time="2024-09-04T17:37:30.532000935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:30.542330 containerd[1433]: time="2024-09-04T17:37:30.542175315Z" level=info msg="StopContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" returns successfully" Sep 4 17:37:30.545622 containerd[1433]: time="2024-09-04T17:37:30.545514268Z" level=info msg="StopContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" returns successfully" Sep 4 17:37:30.546178 containerd[1433]: time="2024-09-04T17:37:30.546019593Z" level=info msg="StopPodSandbox for \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\"" Sep 4 17:37:30.546870 containerd[1433]: time="2024-09-04T17:37:30.546835041Z" level=info msg="StopPodSandbox for \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\"" Sep 4 17:37:30.557620 containerd[1433]: time="2024-09-04T17:37:30.551033242Z" level=info msg="Container to stop \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.557984 containerd[1433]: time="2024-09-04T17:37:30.553870670Z" level=info msg="Container to stop \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.557984 containerd[1433]: time="2024-09-04T17:37:30.557875750Z" level=info msg="Container to stop \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.557984 containerd[1433]: time="2024-09-04T17:37:30.557889430Z" level=info msg="Container to stop \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.557984 containerd[1433]: time="2024-09-04T17:37:30.557899230Z" level=info msg="Container to stop \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.557984 containerd[1433]: time="2024-09-04T17:37:30.557909630Z" level=info msg="Container to stop \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:37:30.559448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3-shm.mount: Deactivated successfully. Sep 4 17:37:30.559557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56-shm.mount: Deactivated successfully. 
Sep 4 17:37:30.563513 systemd[1]: cri-containerd-30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56.scope: Deactivated successfully. Sep 4 17:37:30.566817 kubelet[2526]: E0904 17:37:30.566723 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:30.584122 systemd[1]: cri-containerd-76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3.scope: Deactivated successfully. Sep 4 17:37:30.596645 containerd[1433]: time="2024-09-04T17:37:30.596407610Z" level=info msg="shim disconnected" id=30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56 namespace=k8s.io Sep 4 17:37:30.596645 containerd[1433]: time="2024-09-04T17:37:30.596461770Z" level=warning msg="cleaning up after shim disconnected" id=30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56 namespace=k8s.io Sep 4 17:37:30.596645 containerd[1433]: time="2024-09-04T17:37:30.596469850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:30.615701 containerd[1433]: time="2024-09-04T17:37:30.615656479Z" level=info msg="TearDown network for sandbox \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" successfully" Sep 4 17:37:30.615701 containerd[1433]: time="2024-09-04T17:37:30.615695840Z" level=info msg="StopPodSandbox for \"30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56\" returns successfully" Sep 4 17:37:30.622513 containerd[1433]: time="2024-09-04T17:37:30.622453346Z" level=info msg="shim disconnected" id=76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3 namespace=k8s.io Sep 4 17:37:30.622828 containerd[1433]: time="2024-09-04T17:37:30.622617828Z" level=warning msg="cleaning up after shim disconnected" id=76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3 namespace=k8s.io Sep 4 17:37:30.622828 containerd[1433]: time="2024-09-04T17:37:30.622632748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:30.642894 containerd[1433]: time="2024-09-04T17:37:30.642842467Z" level=info msg="TearDown network for sandbox \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\" successfully" Sep 4 17:37:30.642894 containerd[1433]: time="2024-09-04T17:37:30.642878468Z" level=info msg="StopPodSandbox for \"76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3\" returns successfully" Sep 4 17:37:30.801734 kubelet[2526]: I0904 17:37:30.801461 2526 scope.go:117] "RemoveContainer" containerID="e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0" Sep 4 17:37:30.803835 containerd[1433]: time="2024-09-04T17:37:30.803421531Z" level=info msg="RemoveContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\"" Sep 4 17:37:30.809015 containerd[1433]: time="2024-09-04T17:37:30.808905465Z" level=info msg="RemoveContainer for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" returns successfully" Sep 4 17:37:30.809260 kubelet[2526]: I0904 17:37:30.809235 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-xtables-lock\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809323 kubelet[2526]: I0904 17:37:30.809275 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-hubble-tls\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809323 kubelet[2526]: I0904 17:37:30.809294 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-net\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809323 kubelet[2526]: I0904 17:37:30.809315 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-kernel\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809399 kubelet[2526]: I0904 17:37:30.809333 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cni-path\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809399 kubelet[2526]: I0904 17:37:30.809350 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-bpf-maps\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809399 kubelet[2526]: I0904 17:37:30.809372 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4b7c297-d666-405a-b774-78173dbe9e3b-clustermesh-secrets\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809399 kubelet[2526]: I0904 17:37:30.809391 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-config-path\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809486 kubelet[2526]: I0904 17:37:30.809410 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0743c9b6-754f-49a3-905c-9101a0da0546-cilium-config-path\") pod \"0743c9b6-754f-49a3-905c-9101a0da0546\" (UID: \"0743c9b6-754f-49a3-905c-9101a0da0546\") " Sep 4 17:37:30.809486 kubelet[2526]: I0904 17:37:30.809431 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjmc\" (UniqueName: \"kubernetes.io/projected/0743c9b6-754f-49a3-905c-9101a0da0546-kube-api-access-bhjmc\") pod \"0743c9b6-754f-49a3-905c-9101a0da0546\" (UID: \"0743c9b6-754f-49a3-905c-9101a0da0546\") " Sep 4 17:37:30.809486 kubelet[2526]: I0904 17:37:30.809473 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5tcn\" (UniqueName: \"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-kube-api-access-w5tcn\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809547 kubelet[2526]: I0904 17:37:30.809490 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-etc-cni-netd\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809547 kubelet[2526]: I0904 17:37:30.809506 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-hostproc\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809547 kubelet[2526]: I0904 17:37:30.809523 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-cgroup\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809547 kubelet[2526]: I0904 17:37:30.809542 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-run\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.809631 kubelet[2526]: I0904 17:37:30.809558 2526 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-lib-modules\") pod \"b4b7c297-d666-405a-b774-78173dbe9e3b\" (UID: \"b4b7c297-d666-405a-b774-78173dbe9e3b\") " Sep 4 17:37:30.811179 kubelet[2526]: I0904 17:37:30.809925 2526 scope.go:117] "RemoveContainer" containerID="e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0" Sep 4 17:37:30.815201 kubelet[2526]: I0904 17:37:30.815037 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.816551 kubelet[2526]: I0904 17:37:30.816383 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.816551 kubelet[2526]: I0904 17:37:30.816462 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.816551 kubelet[2526]: I0904 17:37:30.816525 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817003 kubelet[2526]: I0904 17:37:30.816715 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817003 kubelet[2526]: I0904 17:37:30.816969 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0743c9b6-754f-49a3-905c-9101a0da0546-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0743c9b6-754f-49a3-905c-9101a0da0546" (UID: "0743c9b6-754f-49a3-905c-9101a0da0546"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:37:30.817184 kubelet[2526]: I0904 17:37:30.817018 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817184 kubelet[2526]: I0904 17:37:30.817037 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817184 kubelet[2526]: I0904 17:37:30.817070 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817184 kubelet[2526]: I0904 17:37:30.817092 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.817184 kubelet[2526]: I0904 17:37:30.817116 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:37:30.818219 containerd[1433]: time="2024-09-04T17:37:30.810161517Z" level=error msg="ContainerStatus for \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\": not found" Sep 4 17:37:30.819166 kubelet[2526]: I0904 17:37:30.819132 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-kube-api-access-w5tcn" (OuterVolumeSpecName: "kube-api-access-w5tcn") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "kube-api-access-w5tcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:37:30.819533 kubelet[2526]: I0904 17:37:30.819384 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b7c297-d666-405a-b774-78173dbe9e3b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:37:30.820242 kubelet[2526]: I0904 17:37:30.820211 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:37:30.821643 kubelet[2526]: I0904 17:37:30.821559 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0743c9b6-754f-49a3-905c-9101a0da0546-kube-api-access-bhjmc" (OuterVolumeSpecName: "kube-api-access-bhjmc") pod "0743c9b6-754f-49a3-905c-9101a0da0546" (UID: "0743c9b6-754f-49a3-905c-9101a0da0546"). InnerVolumeSpecName "kube-api-access-bhjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:37:30.823168 kubelet[2526]: E0904 17:37:30.822994 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\": not found" containerID="e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0" Sep 4 17:37:30.825252 kubelet[2526]: I0904 17:37:30.825205 2526 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4b7c297-d666-405a-b774-78173dbe9e3b" (UID: "b4b7c297-d666-405a-b774-78173dbe9e3b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:37:30.827268 kubelet[2526]: I0904 17:37:30.827222 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0"} err="failed to get container status \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6590bddf073d0d8f3c976208ee5dea09368fb52d9fbd015993d3b36ae7f16c0\": not found" Sep 4 17:37:30.827268 kubelet[2526]: I0904 17:37:30.827266 2526 scope.go:117] "RemoveContainer" containerID="c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584" Sep 4 17:37:30.828462 containerd[1433]: time="2024-09-04T17:37:30.828425257Z" level=info msg="RemoveContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\"" Sep 4 17:37:30.831149 containerd[1433]: time="2024-09-04T17:37:30.831112164Z" level=info msg="RemoveContainer for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" returns successfully" Sep 4 17:37:30.831386 kubelet[2526]: I0904 17:37:30.831290 2526 scope.go:117] "RemoveContainer" containerID="a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4" Sep 4 17:37:30.833575 containerd[1433]: time="2024-09-04T17:37:30.833542588Z" level=info msg="RemoveContainer for \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\"" Sep 4 17:37:30.837491 containerd[1433]: time="2024-09-04T17:37:30.837458666Z" level=info msg="RemoveContainer for \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\" returns successfully" Sep 4 17:37:30.837728 kubelet[2526]: I0904 17:37:30.837657 2526 scope.go:117] "RemoveContainer" containerID="903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20" Sep 4 17:37:30.838759 containerd[1433]: time="2024-09-04T17:37:30.838511717Z" level=info msg="RemoveContainer for \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\"" Sep 4 17:37:30.840709 containerd[1433]: time="2024-09-04T17:37:30.840673738Z" level=info msg="RemoveContainer for \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\" returns successfully" Sep 4 17:37:30.841091 kubelet[2526]: I0904 17:37:30.840977 2526 scope.go:117] "RemoveContainer" containerID="ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850" Sep 4 17:37:30.841810 containerd[1433]: time="2024-09-04T17:37:30.841772629Z" level=info msg="RemoveContainer for \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\"" Sep 4 17:37:30.844005 containerd[1433]: time="2024-09-04T17:37:30.843959930Z" level=info msg="RemoveContainer for \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\" returns successfully" Sep 4 17:37:30.844198 kubelet[2526]: I0904 17:37:30.844143 2526 scope.go:117] "RemoveContainer" containerID="01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4" Sep 4 17:37:30.845201 containerd[1433]: time="2024-09-04T17:37:30.845139782Z" level=info msg="RemoveContainer for \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\"" Sep 4 17:37:30.847447 containerd[1433]: time="2024-09-04T17:37:30.847413884Z" level=info msg="RemoveContainer for \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\" returns successfully" Sep 4 17:37:30.847712 kubelet[2526]: I0904 17:37:30.847613 2526 scope.go:117] "RemoveContainer" containerID="c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584" Sep 4 
17:37:30.847816 containerd[1433]: time="2024-09-04T17:37:30.847775088Z" level=error msg="ContainerStatus for \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\": not found" Sep 4 17:37:30.847921 kubelet[2526]: E0904 17:37:30.847901 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\": not found" containerID="c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584" Sep 4 17:37:30.847954 kubelet[2526]: I0904 17:37:30.847943 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584"} err="failed to get container status \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\": rpc error: code = NotFound desc = an error occurred when try to find container \"c803ff9832a703aeabc3c1d84807c7d1b9d56c1448d3d6104da3115440118584\": not found" Sep 4 17:37:30.847985 kubelet[2526]: I0904 17:37:30.847957 2526 scope.go:117] "RemoveContainer" containerID="a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4" Sep 4 17:37:30.848159 containerd[1433]: time="2024-09-04T17:37:30.848127772Z" level=error msg="ContainerStatus for \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\": not found" Sep 4 17:37:30.848247 kubelet[2526]: E0904 17:37:30.848233 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\": not found" containerID="a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4" Sep 4 17:37:30.848281 kubelet[2526]: I0904 17:37:30.848258 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4"} err="failed to get container status \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a86011abc3f96e191baf7120773965d6758fe0d2458a3f01a90e52a19a9aeaf4\": not found" Sep 4 17:37:30.848281 kubelet[2526]: I0904 17:37:30.848267 2526 scope.go:117] "RemoveContainer" containerID="903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20" Sep 4 17:37:30.848398 containerd[1433]: time="2024-09-04T17:37:30.848372254Z" level=error msg="ContainerStatus for \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\": not found" Sep 4 17:37:30.848471 kubelet[2526]: E0904 17:37:30.848458 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\": not found" containerID="903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20" Sep 4 
17:37:30.848507 kubelet[2526]: I0904 17:37:30.848497 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20"} err="failed to get container status \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\": rpc error: code = NotFound desc = an error occurred when try to find container \"903e8ce81e700c8313b9bb760ecf39568e38f397ea33bdc6a9132c9078a28d20\": not found" Sep 4 17:37:30.848507 kubelet[2526]: I0904 17:37:30.848507 2526 scope.go:117] "RemoveContainer" containerID="ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850" Sep 4 17:37:30.848658 containerd[1433]: time="2024-09-04T17:37:30.848632976Z" level=error msg="ContainerStatus for \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\": not found" Sep 4 17:37:30.848742 kubelet[2526]: E0904 17:37:30.848724 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\": not found" containerID="ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850" Sep 4 17:37:30.848778 kubelet[2526]: I0904 17:37:30.848752 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850"} err="failed to get container status \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec3461bcec38473619e7c59545d033a789707e76105b1c7ccbe7aa4c0b2bc850\": not found" Sep 4 17:37:30.848778 kubelet[2526]: I0904 17:37:30.848763 2526 scope.go:117] "RemoveContainer" containerID="01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4" Sep 4 17:37:30.849072 containerd[1433]: time="2024-09-04T17:37:30.849033460Z" level=error msg="ContainerStatus for \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\": not found" Sep 4 17:37:30.849189 kubelet[2526]: E0904 17:37:30.849146 2526 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\": not found" containerID="01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4" Sep 4 17:37:30.849189 kubelet[2526]: I0904 17:37:30.849173 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4"} err="failed to get container status \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"01311509e15ce855ca6688ec9e6ef4456a3bf7ea805986a6165d5c9a5bd3c5b4\": not found" Sep 4 17:37:30.910472 kubelet[2526]: I0904 17:37:30.910419 2526 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cni-path\") on node \"localhost\" 
DevicePath \"\"" Sep 4 17:37:30.910472 kubelet[2526]: I0904 17:37:30.910455 2526 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910472 kubelet[2526]: I0904 17:37:30.910468 2526 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4b7c297-d666-405a-b774-78173dbe9e3b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910488 2526 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910498 2526 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910507 2526 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0743c9b6-754f-49a3-905c-9101a0da0546-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910517 2526 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bhjmc\" (UniqueName: \"kubernetes.io/projected/0743c9b6-754f-49a3-905c-9101a0da0546-kube-api-access-bhjmc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910529 2526 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910540 2526 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910555 2526 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910621 kubelet[2526]: I0904 17:37:30.910566 2526 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910820 kubelet[2526]: I0904 17:37:30.910575 2526 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w5tcn\" (UniqueName: \"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-kube-api-access-w5tcn\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910820 kubelet[2526]: I0904 17:37:30.910584 2526 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910820 kubelet[2526]: I0904 17:37:30.910594 2526 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 
17:37:30.910820 kubelet[2526]: I0904 17:37:30.910603 2526 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4b7c297-d666-405a-b774-78173dbe9e3b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:30.910820 kubelet[2526]: I0904 17:37:30.910612 2526 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b7c297-d666-405a-b774-78173dbe9e3b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:37:31.105844 systemd[1]: Removed slice kubepods-besteffort-pod0743c9b6_754f_49a3_905c_9101a0da0546.slice - libcontainer container kubepods-besteffort-pod0743c9b6_754f_49a3_905c_9101a0da0546.slice. Sep 4 17:37:31.110497 systemd[1]: Removed slice kubepods-burstable-podb4b7c297_d666_405a_b774_78173dbe9e3b.slice - libcontainer container kubepods-burstable-podb4b7c297_d666_405a_b774_78173dbe9e3b.slice. Sep 4 17:37:31.110599 systemd[1]: kubepods-burstable-podb4b7c297_d666_405a_b774_78173dbe9e3b.slice: Consumed 6.940s CPU time. Sep 4 17:37:31.449695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76a4cbed47e9f75b7586668ba3f365eadc5e3ab42fd3211b4a5c4041e2fba9a3-rootfs.mount: Deactivated successfully. Sep 4 17:37:31.449803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e996a2d858acc0e945e2a7abcf094533e0ae9b7bc3e1f562e0fd4d7ccd7a56-rootfs.mount: Deactivated successfully. Sep 4 17:37:31.449854 systemd[1]: var-lib-kubelet-pods-0743c9b6\x2d754f\x2d49a3\x2d905c\x2d9101a0da0546-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhjmc.mount: Deactivated successfully. Sep 4 17:37:31.449921 systemd[1]: var-lib-kubelet-pods-b4b7c297\x2dd666\x2d405a\x2db774\x2d78173dbe9e3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5tcn.mount: Deactivated successfully. Sep 4 17:37:31.449974 systemd[1]: var-lib-kubelet-pods-b4b7c297\x2dd666\x2d405a\x2db774\x2d78173dbe9e3b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:37:31.450022 systemd[1]: var-lib-kubelet-pods-b4b7c297\x2dd666\x2d405a\x2db774\x2d78173dbe9e3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:37:31.568381 kubelet[2526]: I0904 17:37:31.568334 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0743c9b6-754f-49a3-905c-9101a0da0546" path="/var/lib/kubelet/pods/0743c9b6-754f-49a3-905c-9101a0da0546/volumes" Sep 4 17:37:31.568768 kubelet[2526]: I0904 17:37:31.568725 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" path="/var/lib/kubelet/pods/b4b7c297-d666-405a-b774-78173dbe9e3b/volumes" Sep 4 17:37:32.382046 sshd[4178]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:32.394500 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:49530.service: Deactivated successfully. Sep 4 17:37:32.395969 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:37:32.396428 systemd[1]: session-24.scope: Consumed 1.330s CPU time. Sep 4 17:37:32.397840 systemd-logind[1413]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:37:32.398861 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:49532.service - OpenSSH per-connection server daemon (10.0.0.1:49532). Sep 4 17:37:32.402245 systemd-logind[1413]: Removed session 24. 
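
The mount unit names in the entries above encode filesystem paths using systemd's escaping: "/" becomes "-", while "-" and "~" inside path components become \x2d and \x7e. The systemd-escape --unescape --path command reverses this; the short Python sketch below does the same so the unit names can be mapped back to kubelet volume directories. The example unit name is copied from the log; the helper itself is illustrative only.

    import re

    def unescape_systemd_path(unit_name: str) -> str:
        # Drop the unit suffix (".mount", ".slice", ...), turn '-' back into '/',
        # then decode \xNN hex escapes such as \x2d ('-') and \x7e ('~').
        name = unit_name.rsplit(".", 1)[0]
        decoded = "/".join(name.split("-"))
        decoded = re.sub(r"\\x([0-9a-fA-F]{2})",
                         lambda m: chr(int(m.group(1), 16)), decoded)
        return "/" + decoded

    # Unit name taken from the log above; the backslashes are literal, hence the raw string.
    unit = r"var-lib-kubelet-pods-b4b7c297\x2dd666\x2d405a\x2db774\x2d78173dbe9e3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount"
    print(unescape_systemd_path(unit))
    # -> /var/lib/kubelet/pods/b4b7c297-d666-405a-b774-78173dbe9e3b/volumes/kubernetes.io~projected/hubble-tls
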
Sep 4 17:37:32.441117 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:32.442880 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:32.446784 systemd-logind[1413]: New session 25 of user core. Sep 4 17:37:32.454960 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:37:33.036500 sshd[4339]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:33.045531 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:49532.service: Deactivated successfully. Sep 4 17:37:33.048008 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:37:33.050391 systemd-logind[1413]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:37:33.068501 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:37914.service - OpenSSH per-connection server daemon (10.0.0.1:37914). Sep 4 17:37:33.073113 systemd-logind[1413]: Removed session 25. Sep 4 17:37:33.078427 kubelet[2526]: I0904 17:37:33.078374 2526 topology_manager.go:215] "Topology Admit Handler" podUID="59c252f7-f8f5-47d3-96fe-142964e99d96" podNamespace="kube-system" podName="cilium-jfbsz" Sep 4 17:37:33.078427 kubelet[2526]: E0904 17:37:33.078434 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="apply-sysctl-overwrites" Sep 4 17:37:33.078845 kubelet[2526]: E0904 17:37:33.078445 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0743c9b6-754f-49a3-905c-9101a0da0546" containerName="cilium-operator" Sep 4 17:37:33.078845 kubelet[2526]: E0904 17:37:33.078452 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="clean-cilium-state" Sep 4 17:37:33.078845 kubelet[2526]: E0904 17:37:33.078459 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="mount-bpf-fs" Sep 4 17:37:33.078845 kubelet[2526]: E0904 17:37:33.078465 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="cilium-agent" Sep 4 17:37:33.078845 kubelet[2526]: E0904 17:37:33.078473 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="mount-cgroup" Sep 4 17:37:33.081417 kubelet[2526]: I0904 17:37:33.080919 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="0743c9b6-754f-49a3-905c-9101a0da0546" containerName="cilium-operator" Sep 4 17:37:33.081417 kubelet[2526]: I0904 17:37:33.080958 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b7c297-d666-405a-b774-78173dbe9e3b" containerName="cilium-agent" Sep 4 17:37:33.100542 systemd[1]: Created slice kubepods-burstable-pod59c252f7_f8f5_47d3_96fe_142964e99d96.slice - libcontainer container kubepods-burstable-pod59c252f7_f8f5_47d3_96fe_142964e99d96.slice. 
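
The kubepods-*.slice names above follow the convention the kubelet's systemd cgroup driver appears to use here: the pod's QoS class plus its UID with dashes turned into underscores. Below is a small sketch of that mapping, using the UIDs and QoS classes that appear in the log; the helper name is made up, and guaranteed-QoS pods (which sit directly under kubepods.slice) are not covered.

    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        # Build the slice name for a burstable/besteffort pod, matching the
        # kubepods-*-pod*.slice units created and removed in the log above.
        uid = pod_uid.replace("-", "_")
        return f"kubepods-{qos_class.lower()}-pod{uid}.slice"

    # UIDs and QoS classes below are taken from the log entries above.
    print(pod_slice_name("59c252f7-f8f5-47d3-96fe-142964e99d96", "burstable"))
    # -> kubepods-burstable-pod59c252f7_f8f5_47d3_96fe_142964e99d96.slice
    print(pod_slice_name("0743c9b6-754f-49a3-905c-9101a0da0546", "besteffort"))
    # -> kubepods-besteffort-pod0743c9b6_754f_49a3_905c_9101a0da0546.slice
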
Sep 4 17:37:33.122753 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 37914 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:33.124298 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.124900 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-cni-path\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.124944 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-etc-cni-netd\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.124977 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-lib-modules\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.125036 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-xtables-lock\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.125112 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59c252f7-f8f5-47d3-96fe-142964e99d96-cilium-config-path\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125419 kubelet[2526]: I0904 17:37:33.125143 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/59c252f7-f8f5-47d3-96fe-142964e99d96-hubble-tls\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125190 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-bpf-maps\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125217 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/59c252f7-f8f5-47d3-96fe-142964e99d96-clustermesh-secrets\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125243 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-cilium-run\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " 
pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125263 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-cilium-cgroup\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125300 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-host-proc-sys-kernel\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125660 kubelet[2526]: I0904 17:37:33.125568 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-hostproc\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125782 kubelet[2526]: I0904 17:37:33.125597 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/59c252f7-f8f5-47d3-96fe-142964e99d96-host-proc-sys-net\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125782 kubelet[2526]: I0904 17:37:33.125663 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8tgp\" (UniqueName: \"kubernetes.io/projected/59c252f7-f8f5-47d3-96fe-142964e99d96-kube-api-access-p8tgp\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.125782 kubelet[2526]: I0904 17:37:33.125687 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/59c252f7-f8f5-47d3-96fe-142964e99d96-cilium-ipsec-secrets\") pod \"cilium-jfbsz\" (UID: \"59c252f7-f8f5-47d3-96fe-142964e99d96\") " pod="kube-system/cilium-jfbsz" Sep 4 17:37:33.129109 systemd-logind[1413]: New session 26 of user core. Sep 4 17:37:33.144041 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:37:33.196310 sshd[4352]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:33.212084 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:37914.service: Deactivated successfully. Sep 4 17:37:33.215333 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:37:33.217208 systemd-logind[1413]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:37:33.227285 systemd[1]: Started sshd@26-10.0.0.135:22-10.0.0.1:37922.service - OpenSSH per-connection server daemon (10.0.0.1:37922). Sep 4 17:37:33.245950 systemd-logind[1413]: Removed session 26. Sep 4 17:37:33.275293 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 37922 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:37:33.277024 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:33.281840 systemd-logind[1413]: New session 27 of user core. Sep 4 17:37:33.288971 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 17:37:33.410994 kubelet[2526]: E0904 17:37:33.408021 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:33.411149 containerd[1433]: time="2024-09-04T17:37:33.410179710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfbsz,Uid:59c252f7-f8f5-47d3-96fe-142964e99d96,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:33.438026 containerd[1433]: time="2024-09-04T17:37:33.437635602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:33.438026 containerd[1433]: time="2024-09-04T17:37:33.437689523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:33.438026 containerd[1433]: time="2024-09-04T17:37:33.437703963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:33.438026 containerd[1433]: time="2024-09-04T17:37:33.437715723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:33.458994 systemd[1]: Started cri-containerd-e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d.scope - libcontainer container e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d. Sep 4 17:37:33.493959 containerd[1433]: time="2024-09-04T17:37:33.493914963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfbsz,Uid:59c252f7-f8f5-47d3-96fe-142964e99d96,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\"" Sep 4 17:37:33.494714 kubelet[2526]: E0904 17:37:33.494690 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:33.496964 containerd[1433]: time="2024-09-04T17:37:33.496914719Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:37:33.507939 containerd[1433]: time="2024-09-04T17:37:33.507881292Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1\"" Sep 4 17:37:33.509480 containerd[1433]: time="2024-09-04T17:37:33.509439070Z" level=info msg="StartContainer for \"005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1\"" Sep 4 17:37:33.540004 systemd[1]: Started cri-containerd-005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1.scope - libcontainer container 005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1. Sep 4 17:37:33.565355 containerd[1433]: time="2024-09-04T17:37:33.565309546Z" level=info msg="StartContainer for \"005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1\" returns successfully" Sep 4 17:37:33.587182 systemd[1]: cri-containerd-005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1.scope: Deactivated successfully. 
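
The RunPodSandbox / CreateContainer / StartContainer messages above are containerd acting on CRI calls from the kubelet. The rough sketch below cross-checks that state with crictl from Python; the exact crictl flags and JSON field names are assumptions based on recent crictl versions, not taken from the log.

    import json
    import subprocess

    def crictl(*args: str):
        # Run a crictl subcommand and parse its JSON output.
        out = subprocess.run(["crictl", *args, "-o", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    pods = crictl("pods", "--name", "cilium-jfbsz")
    for sandbox in pods.get("items", []):
        print("sandbox:", sandbox["id"][:12], sandbox["metadata"]["name"])
        containers = crictl("ps", "-a", "--pod", sandbox["id"])
        for ctr in containers.get("containers", []):
            print("  container:", ctr["id"][:12], ctr["metadata"]["name"], ctr["state"])
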
Sep 4 17:37:33.622415 containerd[1433]: time="2024-09-04T17:37:33.622343996Z" level=info msg="shim disconnected" id=005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1 namespace=k8s.io Sep 4 17:37:33.622415 containerd[1433]: time="2024-09-04T17:37:33.622400077Z" level=warning msg="cleaning up after shim disconnected" id=005e29b66a48e0ee55082d88f604cdf738df93fa6cc90748b1932ff55c26b5e1 namespace=k8s.io Sep 4 17:37:33.622415 containerd[1433]: time="2024-09-04T17:37:33.622409957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:33.635581 kubelet[2526]: E0904 17:37:33.635539 2526 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:37:33.814814 kubelet[2526]: E0904 17:37:33.814653 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:33.818490 containerd[1433]: time="2024-09-04T17:37:33.818335807Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:37:33.834786 containerd[1433]: time="2024-09-04T17:37:33.834728445Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1\"" Sep 4 17:37:33.835597 containerd[1433]: time="2024-09-04T17:37:33.835564295Z" level=info msg="StartContainer for \"d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1\"" Sep 4 17:37:33.859988 systemd[1]: Started cri-containerd-d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1.scope - libcontainer container d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1. Sep 4 17:37:33.886031 containerd[1433]: time="2024-09-04T17:37:33.885966425Z" level=info msg="StartContainer for \"d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1\" returns successfully" Sep 4 17:37:33.892697 systemd[1]: cri-containerd-d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1.scope: Deactivated successfully. 
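
mount-cgroup and apply-sysctl-overwrites above are the first of cilium-jfbsz's init containers, each run to completion and then torn down (hence the paired "Deactivated successfully" / "shim disconnected" messages). A sketch that prints the same progression from the pod's init-container statuses, again assuming the kubernetes Python client and the pod name from the log.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("cilium-jfbsz", "kube-system")
    for st in pod.status.init_container_statuses or []:
        state = st.state
        if state.running:
            phase = "running"
        elif state.terminated:
            phase = f"terminated(exit={state.terminated.exit_code})"
        elif state.waiting:
            phase = f"waiting({state.waiting.reason})"
        else:
            phase = "unknown"
        print(f"{st.name:25s} ready={st.ready} {phase}")
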
Sep 4 17:37:33.916753 containerd[1433]: time="2024-09-04T17:37:33.916628236Z" level=info msg="shim disconnected" id=d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1 namespace=k8s.io Sep 4 17:37:33.916753 containerd[1433]: time="2024-09-04T17:37:33.916742437Z" level=warning msg="cleaning up after shim disconnected" id=d1967c643f2df4301a866878d9f44d2baef2d53acae6b7785bfe6fcca12a6aa1 namespace=k8s.io Sep 4 17:37:33.916753 containerd[1433]: time="2024-09-04T17:37:33.916753557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:34.819249 kubelet[2526]: E0904 17:37:34.819190 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:34.821620 containerd[1433]: time="2024-09-04T17:37:34.821571676Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:37:34.843849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008748549.mount: Deactivated successfully. Sep 4 17:37:34.862179 containerd[1433]: time="2024-09-04T17:37:34.862115795Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b\"" Sep 4 17:37:34.866799 containerd[1433]: time="2024-09-04T17:37:34.864121780Z" level=info msg="StartContainer for \"50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b\"" Sep 4 17:37:34.894024 systemd[1]: Started cri-containerd-50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b.scope - libcontainer container 50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b. Sep 4 17:37:34.923620 systemd[1]: cri-containerd-50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b.scope: Deactivated successfully. Sep 4 17:37:34.928364 containerd[1433]: time="2024-09-04T17:37:34.928321362Z" level=info msg="StartContainer for \"50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b\" returns successfully" Sep 4 17:37:34.958304 containerd[1433]: time="2024-09-04T17:37:34.958241464Z" level=info msg="shim disconnected" id=50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b namespace=k8s.io Sep 4 17:37:34.958304 containerd[1433]: time="2024-09-04T17:37:34.958297545Z" level=warning msg="cleaning up after shim disconnected" id=50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b namespace=k8s.io Sep 4 17:37:34.958304 containerd[1433]: time="2024-09-04T17:37:34.958306185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:35.234173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50d1e24341838df54f32ba00330fc46e548a0019fab13f565f67c956870e030b-rootfs.mount: Deactivated successfully. 
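
mount-bpf-fs, started above, is the Cilium init step that ensures a BPF filesystem is mounted. The mount point /sys/fs/bpf and the "bpf" fstype in the sketch below are the conventional values, not read from the log; this is only a minimal check against /proc/mounts.

    def bpffs_mounted(mountpoint: str = "/sys/fs/bpf") -> bool:
        # Scan /proc/mounts for a bpf filesystem at the expected mount point.
        with open("/proc/mounts") as f:
            for line in f:
                _device, mnt, fstype, *_ = line.split()
                if mnt == mountpoint and fstype == "bpf":
                    return True
        return False

    if __name__ == "__main__":
        print("bpffs mounted at /sys/fs/bpf:", bpffs_mounted())
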
Sep 4 17:37:35.532193 kubelet[2526]: I0904 17:37:35.532089 2526 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:37:35Z","lastTransitionTime":"2024-09-04T17:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:37:35.821823 kubelet[2526]: E0904 17:37:35.821706 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:35.825150 containerd[1433]: time="2024-09-04T17:37:35.825084474Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:37:35.851123 containerd[1433]: time="2024-09-04T17:37:35.851028743Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27\"" Sep 4 17:37:35.851699 containerd[1433]: time="2024-09-04T17:37:35.851637512Z" level=info msg="StartContainer for \"1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27\"" Sep 4 17:37:35.891028 systemd[1]: Started cri-containerd-1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27.scope - libcontainer container 1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27. Sep 4 17:37:35.915423 systemd[1]: cri-containerd-1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27.scope: Deactivated successfully. Sep 4 17:37:35.919368 containerd[1433]: time="2024-09-04T17:37:35.919325224Z" level=info msg="StartContainer for \"1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27\" returns successfully" Sep 4 17:37:35.951488 containerd[1433]: time="2024-09-04T17:37:35.951365735Z" level=info msg="shim disconnected" id=1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27 namespace=k8s.io Sep 4 17:37:35.951488 containerd[1433]: time="2024-09-04T17:37:35.951462216Z" level=warning msg="cleaning up after shim disconnected" id=1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27 namespace=k8s.io Sep 4 17:37:35.951949 containerd[1433]: time="2024-09-04T17:37:35.951532937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:36.234285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c05d636191bc54fec8286bdcddf205affa53495c2c7d6401669809390f84b27-rootfs.mount: Deactivated successfully. Sep 4 17:37:36.826634 kubelet[2526]: E0904 17:37:36.825779 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:36.831137 containerd[1433]: time="2024-09-04T17:37:36.830969571Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:37:36.852662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605223352.mount: Deactivated successfully. 
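
The setters entry above records the node going NotReady with reason KubeletNotReady until the CNI plugin initializes. The sketch below reads the same Ready condition off the Node object (kubernetes Python client assumed; the node name "localhost" is taken from the log).

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    node = v1.read_node("localhost")      # node name as reported in the log above
    for cond in node.status.conditions:
        if cond.type == "Ready":
            print(cond.status, cond.reason, "-", cond.message)
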
Sep 4 17:37:36.857646 containerd[1433]: time="2024-09-04T17:37:36.857595187Z" level=info msg="CreateContainer within sandbox \"e9a3d35f1b5d584763775ef62308c5223e0b4d80f14dd75c8387984e759c7e8d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4a018ca292e565bc003810ee5b2b1a6dc0db80c36c75995f5bb82a3873fa481\"" Sep 4 17:37:36.859811 containerd[1433]: time="2024-09-04T17:37:36.859760777Z" level=info msg="StartContainer for \"e4a018ca292e565bc003810ee5b2b1a6dc0db80c36c75995f5bb82a3873fa481\"" Sep 4 17:37:36.897992 systemd[1]: Started cri-containerd-e4a018ca292e565bc003810ee5b2b1a6dc0db80c36c75995f5bb82a3873fa481.scope - libcontainer container e4a018ca292e565bc003810ee5b2b1a6dc0db80c36c75995f5bb82a3873fa481. Sep 4 17:37:36.930452 containerd[1433]: time="2024-09-04T17:37:36.930392255Z" level=info msg="StartContainer for \"e4a018ca292e565bc003810ee5b2b1a6dc0db80c36c75995f5bb82a3873fa481\" returns successfully" Sep 4 17:37:37.209946 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 17:37:37.830817 kubelet[2526]: E0904 17:37:37.830768 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:37.845010 kubelet[2526]: I0904 17:37:37.844514 2526 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jfbsz" podStartSLOduration=4.844475826 podStartE2EDuration="4.844475826s" podCreationTimestamp="2024-09-04 17:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:37.844324224 +0000 UTC m=+94.394245027" watchObservedRunningTime="2024-09-04 17:37:37.844475826 +0000 UTC m=+94.394396549" Sep 4 17:37:39.409492 kubelet[2526]: E0904 17:37:39.409453 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:40.137205 systemd-networkd[1373]: lxc_health: Link UP Sep 4 17:37:40.145068 systemd-networkd[1373]: lxc_health: Gained carrier Sep 4 17:37:41.410932 kubelet[2526]: E0904 17:37:41.410899 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:41.732918 systemd-networkd[1373]: lxc_health: Gained IPv6LL Sep 4 17:37:41.822636 kubelet[2526]: E0904 17:37:41.822494 2526 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:59264->127.0.0.1:38555: read: connection reset by peer Sep 4 17:37:41.823204 kubelet[2526]: E0904 17:37:41.823086 2526 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59264->127.0.0.1:38555: write tcp 127.0.0.1:59264->127.0.0.1:38555: write: broken pipe Sep 4 17:37:41.841205 kubelet[2526]: E0904 17:37:41.841096 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:42.843346 kubelet[2526]: E0904 17:37:42.843316 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:43.566691 kubelet[2526]: E0904 17:37:43.566650 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:46.053149 kubelet[2526]: E0904 17:37:46.053055 2526 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:46956->127.0.0.1:38555: read: connection reset by peer Sep 4 17:37:46.053149 kubelet[2526]: E0904 17:37:46.053117 2526 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46956->127.0.0.1:38555: write tcp 127.0.0.1:46956->127.0.0.1:38555: write: broken pipe Sep 4 17:37:46.055733 sshd[4360]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:46.058943 systemd[1]: sshd@26-10.0.0.135:22-10.0.0.1:37922.service: Deactivated successfully. Sep 4 17:37:46.060933 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:37:46.062546 systemd-logind[1413]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:37:46.064060 systemd-logind[1413]: Removed session 27. Sep 4 17:37:47.567751 kubelet[2526]: E0904 17:37:47.567711 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
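
The recurring "Nameserver limits exceeded" warnings come from the kubelet trimming the host resolv.conf to at most three nameservers; the log shows only "1.1.1.1 1.0.0.1 8.8.8.8" being applied. A minimal sketch of that check (the limit of three matches both the log and the kubelet's documented DNS cap; the file path is the usual default).

    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf: str = "/etc/resolv.conf"):
        # Collect nameserver entries and warn when more are configured than will be used.
        servers = []
        with open(resolv_conf) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[0] == "nameserver":
                    servers.append(fields[1])
        if len(servers) > MAX_NAMESERVERS:
            print(f"warning: {len(servers)} nameservers configured, "
                  f"only the first {MAX_NAMESERVERS} will be applied")
        return servers[:MAX_NAMESERVERS]

    if __name__ == "__main__":
        print("applied nameserver line:", " ".join(applied_nameservers()))
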