May 8 00:19:15.886815 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:19:15.886837 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 7 22:57:52 -00 2025
May 8 00:19:15.886846 kernel: KASLR enabled
May 8 00:19:15.886852 kernel: efi: EFI v2.7 by EDK II
May 8 00:19:15.886857 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 00:19:15.886863 kernel: random: crng init done
May 8 00:19:15.886870 kernel: ACPI: Early table checksum verification disabled
May 8 00:19:15.886876 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 00:19:15.886882 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:19:15.886889 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886895 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886901 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886907 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886913 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886920 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886928 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886934 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886941 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:15.886947 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:19:15.886953 kernel: NUMA: Failed to initialise from firmware
May 8 00:19:15.886959 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:19:15.886965 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
May 8 00:19:15.886971 kernel: Zone ranges:
May 8 00:19:15.886978 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:19:15.886984 kernel: DMA32 empty
May 8 00:19:15.886991 kernel: Normal empty
May 8 00:19:15.886997 kernel: Movable zone start for each node
May 8 00:19:15.887003 kernel: Early memory node ranges
May 8 00:19:15.887010 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 00:19:15.887016 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 00:19:15.887022 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 00:19:15.887029 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 00:19:15.887035 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 00:19:15.887041 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 00:19:15.887048 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 00:19:15.887054 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:19:15.887060 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:19:15.887067 kernel: psci: probing for conduit method from ACPI.
May 8 00:19:15.887074 kernel: psci: PSCIv1.1 detected in firmware.
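(Aside, not part of the log: the single faked NUMA node above spans 0x40000000-0xdcffffff, which is exactly the 2572288K total that the "Memory:" line further down reports. A quick Python check of the arithmetic:)

    # Illustrative check of the memory span logged above; values are taken
    # straight from the "NUMA: Faking a node" and the later "Memory:" lines.
    start, end = 0x40000000, 0xDCFFFFFF
    size = end - start + 1
    assert size // 1024 == 2572288     # KiB total in "Memory: .../2572288K"
    print(size)                        # 2634022912 bytes
    print(size // 4096)                # 643072 4 KiB pages before reservations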
May 8 00:19:15.887080 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:19:15.887089 kernel: psci: Trusted OS migration not required
May 8 00:19:15.887095 kernel: psci: SMC Calling Convention v1.1
May 8 00:19:15.887102 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:19:15.887110 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 8 00:19:15.887116 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 8 00:19:15.887123 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:19:15.887130 kernel: Detected PIPT I-cache on CPU0
May 8 00:19:15.887136 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:19:15.887143 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:19:15.887150 kernel: CPU features: detected: Spectre-v4
May 8 00:19:15.887156 kernel: CPU features: detected: Spectre-BHB
May 8 00:19:15.887163 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:19:15.887170 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:19:15.887178 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:19:15.887184 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:19:15.887191 kernel: alternatives: applying boot alternatives
May 8 00:19:15.887198 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:19:15.887205 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:19:15.887212 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:19:15.887219 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:19:15.887225 kernel: Fallback order for Node 0: 0
May 8 00:19:15.887232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:19:15.887238 kernel: Policy zone: DMA
May 8 00:19:15.887245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:19:15.887252 kernel: software IO TLB: area num 4.
May 8 00:19:15.887259 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 00:19:15.887267 kernel: Memory: 2386480K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185808K reserved, 0K cma-reserved)
May 8 00:19:15.887274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:19:15.887352 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:19:15.887360 kernel: rcu: RCU event tracing is enabled.
May 8 00:19:15.887367 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:19:15.887374 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:19:15.887381 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:19:15.887387 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
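(Aside, not part of the log: the "Kernel command line:" entry above carries Flatcar's dm-verity and first-boot parameters as plain key=value tokens. A minimal, illustrative sketch of splitting such a line into a dict; on a live system the same string is readable from /proc/cmdline:)

    import shlex

    # Minimal kernel-command-line parser (illustrative sketch). shlex handles
    # the rare quoted values; flag-style tokens without '=' map to True.
    with open("/proc/cmdline") as f:
        cmdline = f.read().strip()
    params = {}
    for tok in shlex.split(cmdline):
        key, sep, value = tok.partition("=")
        params[key] = value if sep else True
    print(params.get("root"))             # e.g. LABEL=ROOT
    print(params.get("verity.usrhash"))   # e.g. ed66668e4cab2597...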
May 8 00:19:15.887394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:19:15.887401 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:19:15.887410 kernel: GICv3: 256 SPIs implemented
May 8 00:19:15.887417 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:19:15.887424 kernel: Root IRQ handler: gic_handle_irq
May 8 00:19:15.887430 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 00:19:15.887437 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:19:15.887444 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:19:15.887450 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:19:15.887457 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:19:15.887464 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 00:19:15.887470 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 00:19:15.887477 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:19:15.887485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:19:15.887491 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:19:15.887498 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:19:15.887505 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:19:15.887512 kernel: arm-pv: using stolen time PV
May 8 00:19:15.887519 kernel: Console: colour dummy device 80x25
May 8 00:19:15.887526 kernel: ACPI: Core revision 20230628
May 8 00:19:15.887533 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:19:15.887540 kernel: pid_max: default: 32768 minimum: 301
May 8 00:19:15.887546 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:19:15.887554 kernel: landlock: Up and running.
May 8 00:19:15.887561 kernel: SELinux: Initializing.
May 8 00:19:15.887568 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:19:15.887575 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:19:15.887581 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:19:15.887588 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:19:15.887595 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:19:15.887602 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:19:15.887608 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:19:15.887616 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:19:15.887623 kernel: Remapping and enabling EFI services.
May 8 00:19:15.887630 kernel: smp: Bringing up secondary CPUs ...
May 8 00:19:15.887636 kernel: Detected PIPT I-cache on CPU1
May 8 00:19:15.887643 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:19:15.887650 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 00:19:15.887657 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:19:15.887663 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:19:15.887670 kernel: Detected PIPT I-cache on CPU2
May 8 00:19:15.887677 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:19:15.887685 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 00:19:15.887692 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:19:15.887703 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:19:15.887712 kernel: Detected PIPT I-cache on CPU3
May 8 00:19:15.887719 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:19:15.887726 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 00:19:15.887733 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:19:15.887740 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:19:15.887747 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:19:15.887756 kernel: SMP: Total of 4 processors activated.
May 8 00:19:15.887763 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:19:15.887770 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:19:15.887783 kernel: CPU features: detected: Common not Private translations
May 8 00:19:15.887791 kernel: CPU features: detected: CRC32 instructions
May 8 00:19:15.887798 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 00:19:15.887805 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:19:15.887812 kernel: CPU features: detected: LSE atomic instructions
May 8 00:19:15.887821 kernel: CPU features: detected: Privileged Access Never
May 8 00:19:15.887828 kernel: CPU features: detected: RAS Extension Support
May 8 00:19:15.887835 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:19:15.887842 kernel: CPU: All CPU(s) started at EL1
May 8 00:19:15.887849 kernel: alternatives: applying system-wide alternatives
May 8 00:19:15.887856 kernel: devtmpfs: initialized
May 8 00:19:15.887864 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:19:15.887871 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:19:15.887878 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:19:15.887886 kernel: SMBIOS 3.0.0 present.
May 8 00:19:15.887893 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 00:19:15.887900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:19:15.887908 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:19:15.887915 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:19:15.887922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:19:15.887930 kernel: audit: initializing netlink subsys (disabled)
May 8 00:19:15.887937 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
May 8 00:19:15.887944 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:19:15.887952 kernel: cpuidle: using governor menu
May 8 00:19:15.887960 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:19:15.887967 kernel: ASID allocator initialised with 32768 entries
May 8 00:19:15.887974 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:19:15.887981 kernel: Serial: AMBA PL011 UART driver
May 8 00:19:15.887989 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 00:19:15.887996 kernel: Modules: 0 pages in range for non-PLT usage
May 8 00:19:15.888003 kernel: Modules: 509024 pages in range for PLT usage
May 8 00:19:15.888010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:19:15.888018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:19:15.888026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:19:15.888033 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 00:19:15.888040 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:19:15.888047 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:19:15.888054 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:19:15.888061 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 00:19:15.888069 kernel: ACPI: Added _OSI(Module Device)
May 8 00:19:15.888076 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:19:15.888084 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:19:15.888091 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:19:15.888098 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:19:15.888106 kernel: ACPI: Interpreter enabled
May 8 00:19:15.888113 kernel: ACPI: Using GIC for interrupt routing
May 8 00:19:15.888125 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:19:15.888132 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:19:15.888139 kernel: printk: console [ttyAMA0] enabled
May 8 00:19:15.888146 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:19:15.888291 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:19:15.888384 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:19:15.888453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:19:15.888520 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:19:15.888581 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:19:15.888591 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:19:15.888598 kernel: PCI host bridge to bus 0000:00
May 8 00:19:15.888670 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:19:15.888728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:19:15.888792 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:19:15.888850 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:19:15.888948 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:19:15.889023 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:19:15.889094 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:19:15.889159 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:19:15.889222 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:19:15.889324 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:19:15.889395 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:19:15.889460 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:19:15.889517 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:19:15.889577 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:19:15.889634 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:19:15.889643 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:19:15.889651 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:19:15.889658 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:19:15.889665 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:19:15.889672 kernel: iommu: Default domain type: Translated
May 8 00:19:15.889679 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:19:15.889687 kernel: efivars: Registered efivars operations
May 8 00:19:15.889696 kernel: vgaarb: loaded
May 8 00:19:15.889703 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:19:15.889710 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:19:15.889717 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:19:15.889724 kernel: pnp: PnP ACPI init
May 8 00:19:15.889800 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:19:15.889812 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:19:15.889819 kernel: NET: Registered PF_INET protocol family
May 8 00:19:15.889828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:19:15.889836 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:19:15.889843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:19:15.889851 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:19:15.889858 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:19:15.889865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:19:15.889872 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:19:15.889880 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:19:15.889887 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:19:15.889895 kernel: PCI: CLS 0 bytes, default 64
May 8 00:19:15.889903 kernel: kvm [1]: HYP mode not available
May 8 00:19:15.889910 kernel: Initialise system trusted keyrings
May 8 00:19:15.889917 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:19:15.889924 kernel: Key type asymmetric registered
May 8 00:19:15.889931 kernel: Asymmetric key parser 'x509' registered
May 8 00:19:15.889938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 00:19:15.889946 kernel: io scheduler mq-deadline registered
May 8 00:19:15.889953 kernel: io scheduler kyber registered
May 8 00:19:15.889961 kernel: io scheduler bfq registered
May 8 00:19:15.889969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:19:15.889976 kernel: ACPI: button: Power Button [PWRB]
May 8 00:19:15.889984 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:19:15.890049 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:19:15.890059 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:19:15.890066 kernel: thunder_xcv, ver 1.0
May 8 00:19:15.890073 kernel: thunder_bgx, ver 1.0
May 8 00:19:15.890080 kernel: nicpf, ver 1.0
May 8 00:19:15.890089 kernel: nicvf, ver 1.0
May 8 00:19:15.890160 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:19:15.890221 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:19:15 UTC (1746663555)
May 8 00:19:15.890231 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:19:15.890238 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:19:15.890246 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 00:19:15.890253 kernel: watchdog: Hard watchdog permanently disabled
May 8 00:19:15.890261 kernel: NET: Registered PF_INET6 protocol family
May 8 00:19:15.890270 kernel: Segment Routing with IPv6
May 8 00:19:15.890277 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:19:15.890300 kernel: NET: Registered PF_PACKET protocol family
May 8 00:19:15.890308 kernel: Key type dns_resolver registered
May 8 00:19:15.890315 kernel: registered taskstats version 1
May 8 00:19:15.890322 kernel: Loading compiled-in X.509 certificates
May 8 00:19:15.890330 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e350a514a19a92525be490be8fe368f9972240ea'
May 8 00:19:15.890337 kernel: Key type .fscrypt registered
May 8 00:19:15.890344 kernel: Key type fscrypt-provisioning registered
May 8 00:19:15.890353 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:19:15.890360 kernel: ima: Allocated hash algorithm: sha1
May 8 00:19:15.890368 kernel: ima: No architecture policies found
May 8 00:19:15.890375 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:19:15.890382 kernel: clk: Disabling unused clocks
May 8 00:19:15.890389 kernel: Freeing unused kernel memory: 39424K
May 8 00:19:15.890397 kernel: Run /init as init process
May 8 00:19:15.890404 kernel: with arguments:
May 8 00:19:15.890411 kernel: /init
May 8 00:19:15.890419 kernel: with environment:
May 8 00:19:15.890426 kernel: HOME=/
May 8 00:19:15.890433 kernel: TERM=linux
May 8 00:19:15.890440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:19:15.890449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:19:15.890458 systemd[1]: Detected virtualization kvm.
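(Aside, not part of the log: the rtc-efi line above prints both the wall-clock time and the corresponding Unix epoch; the pair can be cross-checked with a couple of lines of Python:)

    from datetime import datetime, timezone

    # Cross-check of the rtc-efi line: 2025-05-08T00:19:15 UTC should be
    # 1746663555 seconds since the Unix epoch.
    ts = datetime(2025, 5, 8, 0, 19, 15, tzinfo=timezone.utc)
    assert int(ts.timestamp()) == 1746663555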
May 8 00:19:15.890466 systemd[1]: Detected architecture arm64.
May 8 00:19:15.890473 systemd[1]: Running in initrd.
May 8 00:19:15.890482 systemd[1]: No hostname configured, using default hostname.
May 8 00:19:15.890489 systemd[1]: Hostname set to <localhost>.
May 8 00:19:15.890497 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:19:15.890505 systemd[1]: Queued start job for default target initrd.target.
May 8 00:19:15.890513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:19:15.890520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:19:15.890528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:19:15.890536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:19:15.890546 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:19:15.890553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:19:15.890563 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:19:15.890571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:19:15.890579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:19:15.890587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:19:15.890596 systemd[1]: Reached target paths.target - Path Units.
May 8 00:19:15.890603 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:19:15.890611 systemd[1]: Reached target swap.target - Swaps.
May 8 00:19:15.890619 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:19:15.890627 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:19:15.890635 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:19:15.890654 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:19:15.890662 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:19:15.890670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:19:15.890679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:19:15.890687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:19:15.890695 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:19:15.890703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:19:15.890710 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:19:15.890718 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:19:15.890726 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:19:15.890733 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:19:15.890741 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:19:15.890750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:15.890758 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:19:15.890765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:19:15.890778 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:19:15.890786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:19:15.890796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:15.890821 systemd-journald[238]: Collecting audit messages is disabled.
May 8 00:19:15.890839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:15.890849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:19:15.890858 systemd-journald[238]: Journal started
May 8 00:19:15.890876 systemd-journald[238]: Runtime Journal (/run/log/journal/53d52480543547f1b4006afc759388cf) is 5.9M, max 47.3M, 41.4M free.
May 8 00:19:15.881920 systemd-modules-load[239]: Inserted module 'overlay'
May 8 00:19:15.894312 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:19:15.894353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:19:15.897022 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 8 00:19:15.897816 kernel: Bridge firewalling registered
May 8 00:19:15.899324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:19:15.911472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:19:15.912903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:19:15.914571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:19:15.916124 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:15.921445 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:19:15.923515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:19:15.925562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:19:15.926792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:19:15.930234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:19:15.936724 dracut-cmdline[269]: dracut-dracut-053
May 8 00:19:15.939059 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ed66668e4cab2597a697b6f83cdcbc6a64a98dbc7e2125304191704297c07daf
May 8 00:19:15.956911 systemd-resolved[281]: Positive Trust Anchors:
May 8 00:19:15.956925 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:19:15.956958 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:19:15.961600 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 8 00:19:15.962606 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:19:15.967577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:19:16.011306 kernel: SCSI subsystem initialized
May 8 00:19:16.017297 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:19:16.025308 kernel: iscsi: registered transport (tcp)
May 8 00:19:16.039328 kernel: iscsi: registered transport (qla4xxx)
May 8 00:19:16.039348 kernel: QLogic iSCSI HBA Driver
May 8 00:19:16.080482 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:19:16.092453 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:19:16.107716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:19:16.107762 kernel: device-mapper: uevent: version 1.0.3
May 8 00:19:16.107787 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:19:16.154325 kernel: raid6: neonx8 gen() 15770 MB/s
May 8 00:19:16.171314 kernel: raid6: neonx4 gen() 15654 MB/s
May 8 00:19:16.188294 kernel: raid6: neonx2 gen() 13218 MB/s
May 8 00:19:16.205296 kernel: raid6: neonx1 gen() 10495 MB/s
May 8 00:19:16.222304 kernel: raid6: int64x8 gen() 6943 MB/s
May 8 00:19:16.239296 kernel: raid6: int64x4 gen() 7290 MB/s
May 8 00:19:16.256296 kernel: raid6: int64x2 gen() 6106 MB/s
May 8 00:19:16.273295 kernel: raid6: int64x1 gen() 5041 MB/s
May 8 00:19:16.273309 kernel: raid6: using algorithm neonx8 gen() 15770 MB/s
May 8 00:19:16.290312 kernel: raid6: .... xor() 11880 MB/s, rmw enabled
May 8 00:19:16.290338 kernel: raid6: using neon recovery algorithm
May 8 00:19:16.296293 kernel: xor: measuring software checksum speed
May 8 00:19:16.296310 kernel: 8regs : 19222 MB/sec
May 8 00:19:16.297767 kernel: 32regs : 18452 MB/sec
May 8 00:19:16.297787 kernel: arm64_neon : 26981 MB/sec
May 8 00:19:16.297797 kernel: xor: using function: arm64_neon (26981 MB/sec)
May 8 00:19:16.349300 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:19:16.361359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:19:16.367428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:19:16.379019 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 8 00:19:16.382119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:19:16.389452 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:19:16.400536 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 8 00:19:16.427355 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:19:16.436426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:19:16.477856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:19:16.485679 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:19:16.495808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:19:16.500185 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:19:16.501312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:19:16.503350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:19:16.513474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:19:16.524581 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:19:16.537307 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 00:19:16.543448 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:19:16.543544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:19:16.543556 kernel: GPT:9289727 != 19775487
May 8 00:19:16.543565 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:19:16.543574 kernel: GPT:9289727 != 19775487
May 8 00:19:16.543582 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:19:16.543597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:19:16.538509 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:19:16.538625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:16.542496 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:16.543337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:19:16.543469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:16.544272 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:16.554488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:16.565692 kernel: BTRFS: device fsid 0be52225-f929-4b89-9354-df54a643ece0 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
May 8 00:19:16.565740 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
May 8 00:19:16.571063 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:19:16.575382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:16.580498 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:19:16.588002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:19:16.592006 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:19:16.593220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:19:16.606427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:19:16.608212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:16.613161 disk-uuid[551]: Primary Header is updated.
May 8 00:19:16.613161 disk-uuid[551]: Secondary Entries is updated.
May 8 00:19:16.613161 disk-uuid[551]: Secondary Header is updated.
May 8 00:19:16.621298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:19:16.629090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:17.638327 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:19:17.638386 disk-uuid[552]: The operation has completed successfully.
May 8 00:19:17.668238 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:19:17.668343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:19:17.682497 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:19:17.685910 sh[575]: Success
May 8 00:19:17.715328 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:19:17.752841 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:19:17.762962 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:19:17.764392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:19:17.775330 kernel: BTRFS info (device dm-0): first mount of filesystem 0be52225-f929-4b89-9354-df54a643ece0
May 8 00:19:17.775366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 00:19:17.775378 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:19:17.775388 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:19:17.775754 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:19:17.780098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:19:17.780992 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:19:17.795519 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:19:17.798066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:19:17.806727 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:19:17.806785 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:19:17.806797 kernel: BTRFS info (device vda6): using free space tree
May 8 00:19:17.812524 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:19:17.820025 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:19:17.822319 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:19:17.828914 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:19:17.835528 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:19:17.896525 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:19:17.904431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:19:17.935463 systemd-networkd[763]: lo: Link UP
May 8 00:19:17.935470 systemd-networkd[763]: lo: Gained carrier
May 8 00:19:17.936158 systemd-networkd[763]: Enumeration completed
May 8 00:19:17.936251 systemd[1]: Started systemd-networkd.service - Network Configuration.
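(Aside, not part of the log: the "GPT:9289727 != 19775487" warnings above, and the disk-uuid rewrite that follows, come from the backup GPT header sitting where a smaller original image ended rather than in the disk's last LBA. A small check of the numbers, assuming 512-byte sectors as logged:)

    # Why the kernel logged "GPT:9289727 != 19775487": the backup GPT header
    # belongs in the disk's last LBA, but it was found where the pre-resize
    # image ended.
    sectors = 19775488                  # from the virtio_blk line above
    expected_alt_lba = sectors - 1      # 19775487
    found_alt_lba = 9289727             # from the GPT warning
    print(sectors * 512)                # 10125049856 bytes, the logged "10.1 GB"
    print((found_alt_lba + 1) * 512)    # 4756340736 bytes, ~4.8 GB original image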
May 8 00:19:17.936735 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:19:17.936738 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:19:17.937407 systemd[1]: Reached target network.target - Network.
May 8 00:19:17.942060 systemd-networkd[763]: eth0: Link UP
May 8 00:19:17.942063 systemd-networkd[763]: eth0: Gained carrier
May 8 00:19:17.942070 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:19:17.947918 ignition[669]: Ignition 2.19.0
May 8 00:19:17.947927 ignition[669]: Stage: fetch-offline
May 8 00:19:17.947961 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 8 00:19:17.947970 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:17.948178 ignition[669]: parsed url from cmdline: ""
May 8 00:19:17.948181 ignition[669]: no config URL provided
May 8 00:19:17.948185 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:19:17.948192 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 8 00:19:17.948214 ignition[669]: op(1): [started] loading QEMU firmware config module
May 8 00:19:17.948218 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:19:17.957996 ignition[669]: op(1): [finished] loading QEMU firmware config module
May 8 00:19:17.959349 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:19:17.995965 ignition[669]: parsing config with SHA512: d68c563a3d42799e487568ca832ed251c6a810182c4b7760657cdf0afb1ea42d0534f7a1862c0862afc2780bbb1686023adba8f578efd0f98c4b8a3330cf62cb
May 8 00:19:18.000542 unknown[669]: fetched base config from "system"
May 8 00:19:18.000551 unknown[669]: fetched user config from "qemu"
May 8 00:19:18.000979 ignition[669]: fetch-offline: fetch-offline passed
May 8 00:19:18.001041 ignition[669]: Ignition finished successfully
May 8 00:19:18.002746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:19:18.006590 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:19:18.012449 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:19:18.023948 ignition[774]: Ignition 2.19.0
May 8 00:19:18.023957 ignition[774]: Stage: kargs
May 8 00:19:18.024120 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 8 00:19:18.024129 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:18.025017 ignition[774]: kargs: kargs passed
May 8 00:19:18.025064 ignition[774]: Ignition finished successfully
May 8 00:19:18.027988 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:19:18.038473 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:19:18.048108 ignition[782]: Ignition 2.19.0
May 8 00:19:18.048118 ignition[782]: Stage: disks
May 8 00:19:18.048327 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 8 00:19:18.048338 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:18.049313 ignition[782]: disks: disks passed
May 8 00:19:18.051497 systemd[1]: Finished ignition-disks.service - Ignition (disks).
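(Aside, not part of the log: Ignition prints the SHA512 of the config it pulled over qemu_fw_cfg before applying it. The digest is reproducible from the raw config bytes; "user.ign" below is a hypothetical local copy of that config, not a file from this log:)

    import hashlib

    # Recompute the digest Ignition printed as "parsing config with SHA512: ...".
    # "user.ign" is a stand-in path for a local copy of the fetched config.
    with open("user.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())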
May 8 00:19:18.049365 ignition[782]: Ignition finished successfully
May 8 00:19:18.052838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:19:18.055382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:19:18.056683 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:19:18.058107 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:19:18.059572 systemd[1]: Reached target basic.target - Basic System.
May 8 00:19:18.073433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:19:18.082774 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:19:18.085967 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:19:18.088670 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:19:18.131636 kernel: EXT4-fs (vda9): mounted filesystem f1546e2a-34df-485a-a644-37e10cd925e0 r/w with ordered data mode. Quota mode: none.
May 8 00:19:18.132082 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:19:18.133185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:19:18.145430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:19:18.146991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:19:18.148082 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:19:18.148156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:19:18.148210 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:19:18.154045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:19:18.155415 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
May 8 00:19:18.155436 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:19:18.157428 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:19:18.157465 kernel: BTRFS info (device vda6): using free space tree
May 8 00:19:18.155862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:19:18.161304 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:19:18.162218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:19:18.196005 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:19:18.200175 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 8 00:19:18.203340 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:19:18.207489 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:19:18.274816 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:19:18.284401 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:19:18.285776 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:19:18.290300 kernel: BTRFS info (device vda6): last unmount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:19:18.305179 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:19:18.308223 ignition[914]: INFO : Ignition 2.19.0
May 8 00:19:18.308942 ignition[914]: INFO : Stage: mount
May 8 00:19:18.308942 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:19:18.308942 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:18.311735 ignition[914]: INFO : mount: mount passed
May 8 00:19:18.311735 ignition[914]: INFO : Ignition finished successfully
May 8 00:19:18.312986 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:19:18.320409 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:19:18.773823 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:19:18.783537 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:19:18.788299 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 8 00:19:18.790413 kernel: BTRFS info (device vda6): first mount of filesystem a4a0b304-74d7-4600-bc4f-fa8751ae54a8
May 8 00:19:18.790428 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:19:18.790437 kernel: BTRFS info (device vda6): using free space tree
May 8 00:19:18.792296 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:19:18.793395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:19:18.808941 ignition[945]: INFO : Ignition 2.19.0
May 8 00:19:18.808941 ignition[945]: INFO : Stage: files
May 8 00:19:18.810204 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:19:18.810204 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:18.810204 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:19:18.812892 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:19:18.812892 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:19:18.812892 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:19:18.812892 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:19:18.816890 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:19:18.816890 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 8 00:19:18.816890 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 8 00:19:18.813113 unknown[945]: wrote ssh authorized keys file for user: core
May 8 00:19:19.136527 systemd-networkd[763]: eth0: Gained IPv6LL
May 8 00:19:19.376728 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:19:19.621700 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 8 00:19:19.621700 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:19:19.624686 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 00:19:19.959149 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:19:20.045842 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 00:19:20.047427 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 8 00:19:20.303510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:19:20.660249 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 00:19:20.660249 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:19:20.662911 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:19:20.689484 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:19:20.693469 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:19:20.695377 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:19:20.695377 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:19:20.695377 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:19:20.695377 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:19:20.695377 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:19:20.695377 ignition[945]: INFO : files: files passed
May 8 00:19:20.695377 ignition[945]: INFO : Ignition finished successfully
May 8 00:19:20.698773 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:19:20.711433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:19:20.714077 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:19:20.715251 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:19:20.715358 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:19:20.722271 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:19:20.725987 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:19:20.725987 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:19:20.728403 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:19:20.728694 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:19:20.730779 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:19:20.752478 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:19:20.772407 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:19:20.773208 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:19:20.774336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:19:20.775051 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:19:20.776554 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:19:20.777360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:19:20.792818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:19:20.806465 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:19:20.814818 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:19:20.815767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:19:20.817253 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:19:20.818641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:19:20.818779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:19:20.821664 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:19:20.823134 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:19:20.824381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:19:20.825704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:19:20.827249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:19:20.828714 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:19:20.830080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:19:20.831535 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:19:20.832940 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:19:20.834231 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:19:20.835346 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:19:20.835473 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:19:20.837180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:19:20.838606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:19:20.840015 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:19:20.843332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:19:20.844238 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:19:20.844378 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:19:20.846446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:19:20.846562 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:19:20.847986 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:19:20.849111 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:19:20.852355 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:19:20.853299 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:19:20.854885 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:19:20.855993 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:19:20.856081 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:19:20.857165 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:19:20.857243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:19:20.858370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:19:20.858479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:19:20.859777 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:19:20.859874 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:19:20.879485 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:19:20.880969 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:19:20.881653 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:19:20.881785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:19:20.883192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:19:20.883372 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:19:20.888014 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:19:20.888160 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:19:20.892578 ignition[1000]: INFO : Ignition 2.19.0
May 8 00:19:20.892578 ignition[1000]: INFO : Stage: umount
May 8 00:19:20.894681 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:19:20.894681 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:19:20.894681 ignition[1000]: INFO : umount: umount passed
May 8 00:19:20.894681 ignition[1000]: INFO : Ignition finished successfully
May 8 00:19:20.895364 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:19:20.895856 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:19:20.897306 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:19:20.900501 systemd[1]: Stopped target network.target - Network.
May 8 00:19:20.901256 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:19:20.901355 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:19:20.902985 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:19:20.903026 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:19:20.904242 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:19:20.904340 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:19:20.905622 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:19:20.905669 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:19:20.907135 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:19:20.908324 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:19:20.909819 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:19:20.909908 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:19:20.911345 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:19:20.911429 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:19:20.916446 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:19:20.916550 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:19:20.918628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:19:20.918681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:19:20.919338 systemd-networkd[763]: eth0: DHCPv6 lease lost
May 8 00:19:20.921050 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:19:20.921152 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:19:20.924356 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:19:20.924393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:19:20.930389 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:19:20.931025 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:19:20.931082 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:19:20.932558 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:19:20.932601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:19:20.933856 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:19:20.933895 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:19:20.935523 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:19:20.943591 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:19:20.943702 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:19:20.956093 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:19:20.956259 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:19:20.958123 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:19:20.958164 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:19:20.959313 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:19:20.959344 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:19:20.960628 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:19:20.960673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:19:20.962682 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:19:20.962727 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:19:20.964629 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:19:20.964672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:20.982442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:19:20.983192 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:19:20.983245 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:19:20.984854 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:19:20.984899 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:19:20.986289 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:19:20.986327 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:19:20.987928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:19:20.987967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:20.989601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:19:20.989680 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:19:20.991457 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:19:20.993061 systemd[1]: Starting initrd-switch-root.service - Switch Root...
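This closes the initrd phase: each Ignition stage (fetch, disks, mount, files, umount) has run exactly once, and control is about to hand over to the real root. After the system is up, the same history can be replayed from the journal, and the summary Ignition wrote in op(13) can be read back; the commands below are standard tooling shown as a sketch (once /sysroot becomes /, the result file lives at /etc/.ignition-result.json):

    journalctl -t ignition            # replay the stage-by-stage Ignition entries above
    jq . /etc/.ignition-result.json   # machine-readable outcome written by the files stage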
May 8 00:19:21.002719 systemd[1]: Switching root.
May 8 00:19:21.028173 systemd-journald[238]: Journal stopped
May 8 00:19:21.738040 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 8 00:19:21.738097 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:19:21.738113 kernel: SELinux: policy capability open_perms=1
May 8 00:19:21.738123 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:19:21.738136 kernel: SELinux: policy capability always_check_network=0
May 8 00:19:21.738146 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:19:21.738155 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:19:21.738165 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:19:21.738174 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:19:21.738184 kernel: audit: type=1403 audit(1746663561.191:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:19:21.738199 systemd[1]: Successfully loaded SELinux policy in 31.489ms.
May 8 00:19:21.738215 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.602ms.
May 8 00:19:21.738227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:19:21.738240 systemd[1]: Detected virtualization kvm.
May 8 00:19:21.738251 systemd[1]: Detected architecture arm64.
May 8 00:19:21.738261 systemd[1]: Detected first boot.
May 8 00:19:21.738272 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:19:21.738296 zram_generator::config[1045]: No configuration found.
May 8 00:19:21.738325 systemd[1]: Populated /etc with preset unit settings.
May 8 00:19:21.738335 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:19:21.738346 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:19:21.738358 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:19:21.738369 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:19:21.738381 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:19:21.738392 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:19:21.738403 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:19:21.738414 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:19:21.738424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:19:21.738435 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:19:21.738445 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:19:21.738457 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:19:21.738468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:19:21.738479 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:19:21.738491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:19:21.738502 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:19:21.738512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:19:21.738523 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 8 00:19:21.738533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:19:21.738543 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:19:21.738555 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:19:21.738566 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:19:21.738576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:19:21.738586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:19:21.738597 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:19:21.738608 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:19:21.738619 systemd[1]: Reached target swap.target - Swaps.
May 8 00:19:21.738630 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:19:21.738641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:19:21.738652 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:19:21.738663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:19:21.738673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:19:21.738684 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:19:21.738695 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:19:21.738705 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:19:21.738715 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:19:21.738727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:19:21.738745 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:19:21.738757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:19:21.738770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:19:21.738781 systemd[1]: Reached target machines.target - Containers.
May 8 00:19:21.738792 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:19:21.738802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:19:21.738813 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:19:21.738823 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:19:21.738837 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:19:21.738848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:19:21.738859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:19:21.738869 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:19:21.738880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:19:21.738891 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:19:21.738902 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:19:21.738912 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:19:21.738924 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:19:21.738934 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:19:21.738944 kernel: loop: module loaded
May 8 00:19:21.738954 kernel: fuse: init (API version 7.39)
May 8 00:19:21.738964 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:19:21.738974 kernel: ACPI: bus type drm_connector registered
May 8 00:19:21.738984 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:19:21.738995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:19:21.739006 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:19:21.739016 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:19:21.739028 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:19:21.739039 systemd[1]: Stopped verity-setup.service.
May 8 00:19:21.739071 systemd-journald[1112]: Collecting audit messages is disabled.
May 8 00:19:21.739095 systemd-journald[1112]: Journal started
May 8 00:19:21.739117 systemd-journald[1112]: Runtime Journal (/run/log/journal/53d52480543547f1b4006afc759388cf) is 5.9M, max 47.3M, 41.4M free.
May 8 00:19:21.739166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:19:21.544231 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:19:21.570078 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:19:21.570411 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:19:21.741947 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:19:21.742509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:19:21.743414 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:19:21.744250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:19:21.745173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:19:21.746111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:19:21.748322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:19:21.749403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:19:21.750559 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:19:21.750702 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:19:21.751838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:19:21.751972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:19:21.753051 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:19:21.753176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:19:21.754221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:19:21.755432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:19:21.756601 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:19:21.756752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:19:21.757810 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:19:21.757950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:19:21.759054 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:19:21.760194 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:19:21.761458 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:19:21.774068 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:19:21.782398 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:19:21.784302 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:19:21.785106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:19:21.785145 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:19:21.786901 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:19:21.788822 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:19:21.790683 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:19:21.791550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:19:21.792906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:19:21.796456 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:19:21.797894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:19:21.799459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:19:21.800594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:19:21.802196 systemd-journald[1112]: Time spent on flushing to /var/log/journal/53d52480543547f1b4006afc759388cf is 19.081ms for 857 entries.
May 8 00:19:21.802196 systemd-journald[1112]: System Journal (/var/log/journal/53d52480543547f1b4006afc759388cf) is 8.0M, max 195.6M, 187.6M free.
May 8 00:19:21.826814 systemd-journald[1112]: Received client request to flush runtime journal.
May 8 00:19:21.803528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:19:21.807558 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:19:21.812871 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:19:21.818457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:19:21.821012 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:19:21.822589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
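The journal activity above is journald flushing the volatile runtime journal under /run/log/journal into the persistent one under /var/log/journal, logging the sizes and limits of both as it goes. For reference, the equivalent inspection can be done with stock journalctl options (illustrative usage; output differs per machine):

    journalctl --disk-usage    # combined size of runtime and persistent journal files
    journalctl --flush         # request the same runtime-to-persistent flush manually
    journalctl -b -u systemd-journal-flush.service   # the unit doing it during boot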
May 8 00:19:21.824086 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:19:21.825873 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:19:21.830829 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:19:21.834098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:19:21.836298 kernel: loop0: detected capacity change from 0 to 114328
May 8 00:19:21.839474 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:19:21.845487 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:19:21.853792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:19:21.858177 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:19:21.861341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:19:21.863396 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:19:21.863980 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:19:21.864070 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
May 8 00:19:21.864089 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
May 8 00:19:21.867874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:19:21.879307 kernel: loop1: detected capacity change from 0 to 114432
May 8 00:19:21.881499 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:19:21.904492 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:19:21.915312 kernel: loop2: detected capacity change from 0 to 201592
May 8 00:19:21.921622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:19:21.934452 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 8 00:19:21.934472 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 8 00:19:21.938741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:19:21.939436 kernel: loop3: detected capacity change from 0 to 114328
May 8 00:19:21.949333 kernel: loop4: detected capacity change from 0 to 114432
May 8 00:19:21.958312 kernel: loop5: detected capacity change from 0 to 201592
May 8 00:19:21.964920 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:19:21.965326 (sd-merge)[1183]: Merged extensions into '/usr'.
May 8 00:19:21.968814 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:19:21.968833 systemd[1]: Reloading...
May 8 00:19:22.015825 zram_generator::config[1207]: No configuration found.
May 8 00:19:22.114649 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:19:22.129334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:19:22.165353 systemd[1]: Reloading finished in 196 ms.
May 8 00:19:22.203386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
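The (sd-merge) lines are systemd-sysext activating the system extension images found on disk (the loop0 through loop5 capacity changes above are their backing loop devices) and overlaying them read-only onto /usr; this is how the containerd, docker, and kubernetes payloads appear in the file system, including the kubernetes-v1.32.0-arm64.raw that Ignition linked into /etc/extensions earlier. A sketch of how the merge is inspected on a running node, using standard systemd-sysext subcommands:

    systemd-sysext status     # which hierarchies are merged and from which extensions
    ls -l /etc/extensions     # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
    systemd-sysext refresh    # unmerge and re-merge after adding or removing an image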
May 8 00:19:22.204535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:19:22.217445 systemd[1]: Starting ensure-sysext.service...
May 8 00:19:22.219763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:19:22.228796 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
May 8 00:19:22.228816 systemd[1]: Reloading...
May 8 00:19:22.236926 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:19:22.237186 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:19:22.237852 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:19:22.238066 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
May 8 00:19:22.238117 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
May 8 00:19:22.240543 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:19:22.240556 systemd-tmpfiles[1246]: Skipping /boot
May 8 00:19:22.247644 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:19:22.247661 systemd-tmpfiles[1246]: Skipping /boot
May 8 00:19:22.287311 zram_generator::config[1276]: No configuration found.
May 8 00:19:22.367332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:19:22.402276 systemd[1]: Reloading finished in 173 ms.
May 8 00:19:22.417344 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:19:22.424726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:19:22.432212 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:19:22.434506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:19:22.436661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:19:22.441576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:19:22.445651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:19:22.451863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:19:22.458160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:19:22.459500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:19:22.467547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:19:22.473535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:19:22.474806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:19:22.480194 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
May 8 00:19:22.480558 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:19:22.484175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
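The systemd-tmpfiles "Duplicate line for path" warnings above are benign: two tmpfiles.d(5) fragments declare the same path, and only the first declaration is honored. A hypothetical pair of fragments reproducing the warning, with file names and modes invented for illustration:

    # /usr/lib/tmpfiles.d/one.conf
    d /var/log/journal 0755 root root -
    # /usr/lib/tmpfiles.d/two.conf  (same path again: "Duplicate line for path ..., ignoring")
    d /var/log/journal 2755 root systemd-journal -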
May 8 00:19:22.486155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:19:22.486275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:19:22.488869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:19:22.488999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:19:22.494370 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:19:22.497477 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:19:22.501397 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:19:22.503362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:19:22.518366 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:19:22.521198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:19:22.534625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:19:22.539345 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:19:22.542190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:19:22.544493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:19:22.546359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:19:22.550514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:19:22.554532 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:19:22.556360 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:19:22.557219 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:19:22.565101 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:19:22.565440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:19:22.573797 systemd[1]: Finished ensure-sysext.service.
May 8 00:19:22.575054 augenrules[1363]: No rules
May 8 00:19:22.577590 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:19:22.579070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:19:22.579201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:19:22.589570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1344)
May 8 00:19:22.588397 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:19:22.590890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:19:22.591025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:19:22.594131 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:19:22.594433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:19:22.610508 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 8 00:19:22.622047 systemd-resolved[1314]: Positive Trust Anchors:
May 8 00:19:22.622062 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:19:22.622094 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:19:22.624269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:19:22.630442 systemd-resolved[1314]: Defaulting to hostname 'linux'.
May 8 00:19:22.632567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:19:22.633750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:19:22.633818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:19:22.643816 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:19:22.645411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:19:22.647466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:19:22.675193 systemd-networkd[1370]: lo: Link UP
May 8 00:19:22.675208 systemd-networkd[1370]: lo: Gained carrier
May 8 00:19:22.675765 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:19:22.676613 systemd-networkd[1370]: Enumeration completed
May 8 00:19:22.677462 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:19:22.679273 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:19:22.679296 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:19:22.680630 systemd-networkd[1370]: eth0: Link UP
May 8 00:19:22.680642 systemd-networkd[1370]: eth0: Gained carrier
May 8 00:19:22.680657 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:19:22.680663 systemd[1]: Reached target network.target - Network.
May 8 00:19:22.690448 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:19:22.692697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:22.696375 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:19:22.697765 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:19:22.700587 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:19:22.702965 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:19:22.704251 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:19:22.705892 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:19:22.705944 systemd-timesyncd[1389]: Initial clock synchronization to Thu 2025-05-08 00:19:22.722578 UTC.
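Here systemd-networkd matched eth0 against Flatcar's catch-all zz-default.network and configured it by DHCP, acquiring 10.0.0.45/16 from the gateway 10.0.0.1; timesyncd then synchronized against an NTP server at the same address. The shipped unit carries more options, but a minimal .network file behaving this way would be, as a sketch in standard systemd.network(5) syntax:

    [Match]
    Name=*

    [Network]
    DHCP=yes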
May 8 00:19:22.721691 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:19:22.738484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:22.759887 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:19:22.761441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:19:22.762574 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:19:22.763737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:19:22.764961 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:19:22.766399 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:19:22.767568 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:19:22.768840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:19:22.770226 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:19:22.770264 systemd[1]: Reached target paths.target - Path Units.
May 8 00:19:22.771143 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:19:22.772896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:19:22.775388 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:19:22.789421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:19:22.791856 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:19:22.793512 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:19:22.794755 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:19:22.795758 systemd[1]: Reached target basic.target - Basic System.
May 8 00:19:22.796809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:19:22.796838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:19:22.797898 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:19:22.802319 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:19:22.800003 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:19:22.803485 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:19:22.806264 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:19:22.809839 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:19:22.811077 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:19:22.815980 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:19:22.817959 jq[1411]: false
May 8 00:19:22.817915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:19:22.820791 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:19:22.829943 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:19:22.837842 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:19:22.838378 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:19:22.841107 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:19:22.843074 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:19:22.843895 dbus-daemon[1410]: [system] SELinux support is enabled
May 8 00:19:22.844524 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:19:22.852560 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:19:22.855783 jq[1427]: true
May 8 00:19:22.861783 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:19:22.863353 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:19:22.863683 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:19:22.863887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:19:22.868008 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:19:22.868213 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:19:22.877078 extend-filesystems[1412]: Found loop3
May 8 00:19:22.877078 extend-filesystems[1412]: Found loop4
May 8 00:19:22.878389 extend-filesystems[1412]: Found loop5
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda1
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda2
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda3
May 8 00:19:22.878389 extend-filesystems[1412]: Found usr
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda4
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda6
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda7
May 8 00:19:22.878389 extend-filesystems[1412]: Found vda9
May 8 00:19:22.878389 extend-filesystems[1412]: Checking size of /dev/vda9
May 8 00:19:22.899263 update_engine[1423]: I20250508 00:19:22.897058 1423 main.cc:92] Flatcar Update Engine starting
May 8 00:19:22.899478 tar[1430]: linux-arm64/LICENSE
May 8 00:19:22.899478 tar[1430]: linux-arm64/helm
May 8 00:19:22.886060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:19:22.886114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:19:22.899763 jq[1431]: true
May 8 00:19:22.888472 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button)
May 8 00:19:22.888492 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:19:22.888514 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:19:22.889247 systemd-logind[1419]: New seat seat0.
May 8 00:19:22.893296 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:19:22.899650 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:19:22.901933 update_engine[1423]: I20250508 00:19:22.901880 1423 update_check_scheduler.cc:74] Next update check in 9m5s
May 8 00:19:22.905762 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:19:22.910553 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:19:22.910886 extend-filesystems[1412]: Resized partition /dev/vda9
May 8 00:19:22.915126 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
May 8 00:19:22.931124 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:19:22.931194 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1349)
May 8 00:19:22.967300 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:19:22.978388 locksmithd[1449]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:19:22.979908 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:19:22.979908 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:19:22.979908 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:19:22.982859 extend-filesystems[1412]: Resized filesystem in /dev/vda9
May 8 00:19:22.982066 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:19:22.983599 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:19:22.987290 bash[1464]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:19:22.988052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:19:22.989900 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:19:23.119986 containerd[1440]: time="2025-05-08T00:19:23.119892803Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:19:23.151764 containerd[1440]: time="2025-05-08T00:19:23.151595037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:19:23.153236 containerd[1440]: time="2025-05-08T00:19:23.153199797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:19:23.153236 containerd[1440]: time="2025-05-08T00:19:23.153233753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:19:23.153350 containerd[1440]: time="2025-05-08T00:19:23.153250650Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:19:23.153505 containerd[1440]: time="2025-05-08T00:19:23.153456546Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:19:23.153505 containerd[1440]: time="2025-05-08T00:19:23.153483775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:19:23.153572 containerd[1440]: time="2025-05-08T00:19:23.153553528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:19:23.153596 containerd[1440]: time="2025-05-08T00:19:23.153570185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:19:23.153754 containerd[1440]: time="2025-05-08T00:19:23.153733876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:19:23.153784 containerd[1440]: time="2025-05-08T00:19:23.153754378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:19:23.153784 containerd[1440]: time="2025-05-08T00:19:23.153767912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:19:23.153784 containerd[1440]: time="2025-05-08T00:19:23.153778363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:19:23.153899 containerd[1440]: time="2025-05-08T00:19:23.153849798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:19:23.154109 containerd[1440]: time="2025-05-08T00:19:23.154075634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:19:23.154215 containerd[1440]: time="2025-05-08T00:19:23.154195400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:19:23.154243 containerd[1440]: time="2025-05-08T00:19:23.154214940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:19:23.154342 containerd[1440]: time="2025-05-08T00:19:23.154325816Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:19:23.154407 containerd[1440]: time="2025-05-08T00:19:23.154376589Z" level=info msg="metadata content store policy set" policy=shared May 8 00:19:23.160857 containerd[1440]: time="2025-05-08T00:19:23.160810886Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:19:23.160970 containerd[1440]: time="2025-05-08T00:19:23.160879598Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:19:23.160970 containerd[1440]: time="2025-05-08T00:19:23.160911472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:19:23.160970 containerd[1440]: time="2025-05-08T00:19:23.160934416Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:19:23.160970 containerd[1440]: time="2025-05-08T00:19:23.160949632Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:19:23.161153 containerd[1440]: time="2025-05-08T00:19:23.161131142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 8 00:19:23.161408 containerd[1440]: time="2025-05-08T00:19:23.161390173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:19:23.161516 containerd[1440]: time="2025-05-08T00:19:23.161498246Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:19:23.161558 containerd[1440]: time="2025-05-08T00:19:23.161519549Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:19:23.161558 containerd[1440]: time="2025-05-08T00:19:23.161534084Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:19:23.161558 containerd[1440]: time="2025-05-08T00:19:23.161548819Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161610 containerd[1440]: time="2025-05-08T00:19:23.161563314Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161610 containerd[1440]: time="2025-05-08T00:19:23.161576728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161610 containerd[1440]: time="2025-05-08T00:19:23.161591264Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161610 containerd[1440]: time="2025-05-08T00:19:23.161605679Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161685 containerd[1440]: time="2025-05-08T00:19:23.161623738Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161685 containerd[1440]: time="2025-05-08T00:19:23.161637032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161685 containerd[1440]: time="2025-05-08T00:19:23.161649364Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:19:23.161685 containerd[1440]: time="2025-05-08T00:19:23.161674431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161685 containerd[1440]: time="2025-05-08T00:19:23.161689086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161779 containerd[1440]: time="2025-05-08T00:19:23.161701579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161779 containerd[1440]: time="2025-05-08T00:19:23.161714553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161779 containerd[1440]: time="2025-05-08T00:19:23.161727366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161906 containerd[1440]: time="2025-05-08T00:19:23.161881528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161949 containerd[1440]: time="2025-05-08T00:19:23.161906954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 8 00:19:23.161949 containerd[1440]: time="2025-05-08T00:19:23.161922490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:19:23.161949 containerd[1440]: time="2025-05-08T00:19:23.161935544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162003 containerd[1440]: time="2025-05-08T00:19:23.161949879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162003 containerd[1440]: time="2025-05-08T00:19:23.161962973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162003 containerd[1440]: time="2025-05-08T00:19:23.161979750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162003 containerd[1440]: time="2025-05-08T00:19:23.161997849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162072 containerd[1440]: time="2025-05-08T00:19:23.162015428Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:19:23.162072 containerd[1440]: time="2025-05-08T00:19:23.162041655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162072 containerd[1440]: time="2025-05-08T00:19:23.162054589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:19:23.162072 containerd[1440]: time="2025-05-08T00:19:23.162066081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:19:23.163012 containerd[1440]: time="2025-05-08T00:19:23.162957454Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:19:23.163149 containerd[1440]: time="2025-05-08T00:19:23.163131356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163148734Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163162869Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163172639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163185493Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163195864Z" level=info msg="NRI interface is disabled by configuration." May 8 00:19:23.163231 containerd[1440]: time="2025-05-08T00:19:23.163207276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:19:23.163632 containerd[1440]: time="2025-05-08T00:19:23.163573900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:19:23.163755 containerd[1440]: time="2025-05-08T00:19:23.163637126Z" level=info msg="Connect containerd service" May 8 00:19:23.163755 containerd[1440]: time="2025-05-08T00:19:23.163667558Z" level=info msg="using legacy CRI server" May 8 00:19:23.163755 containerd[1440]: time="2025-05-08T00:19:23.163674445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:19:23.163820 containerd[1440]: time="2025-05-08T00:19:23.163764900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:19:23.164684 containerd[1440]: time="2025-05-08T00:19:23.164652869Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:19:23.164884 
containerd[1440]: time="2025-05-08T00:19:23.164848915Z" level=info msg="Start subscribing containerd event" May 8 00:19:23.164926 containerd[1440]: time="2025-05-08T00:19:23.164913622Z" level=info msg="Start recovering state" May 8 00:19:23.165112 containerd[1440]: time="2025-05-08T00:19:23.165078915Z" level=info msg="Start event monitor" May 8 00:19:23.165112 containerd[1440]: time="2025-05-08T00:19:23.165104702Z" level=info msg="Start snapshots syncer" May 8 00:19:23.165112 containerd[1440]: time="2025-05-08T00:19:23.165114513Z" level=info msg="Start cni network conf syncer for default" May 8 00:19:23.165183 containerd[1440]: time="2025-05-08T00:19:23.165121840Z" level=info msg="Start streaming server" May 8 00:19:23.165710 containerd[1440]: time="2025-05-08T00:19:23.165686552Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:19:23.165780 containerd[1440]: time="2025-05-08T00:19:23.165734802Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:19:23.165894 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:19:23.168390 containerd[1440]: time="2025-05-08T00:19:23.167349533Z" level=info msg="containerd successfully booted in 0.048556s" May 8 00:19:23.315190 tar[1430]: linux-arm64/README.md May 8 00:19:23.327335 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:19:23.342536 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:19:23.363386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:19:23.373592 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:19:23.379719 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:19:23.379903 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:19:23.382516 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:19:23.396024 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:19:23.398652 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:19:23.400586 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 00:19:23.401710 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:19:23.872489 systemd-networkd[1370]: eth0: Gained IPv6LL May 8 00:19:23.875232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:19:23.876832 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:19:23.890614 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:19:23.892862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:23.894643 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:19:23.915531 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:19:23.916943 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:19:23.917106 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:19:23.919560 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:19:24.406897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:24.408228 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 8 00:19:24.411854 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:19:24.412352 systemd[1]: Startup finished in 532ms (kernel) + 5.489s (initrd) + 3.254s (userspace) = 9.276s. May 8 00:19:24.808953 kubelet[1524]: E0508 00:19:24.808789 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:19:24.811314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:19:24.811462 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:19:28.337877 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:19:28.338972 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950). May 8 00:19:28.428288 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:28.429985 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:28.439054 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:19:28.447550 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:19:28.449383 systemd-logind[1419]: New session 1 of user core. May 8 00:19:28.456712 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:19:28.459347 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:19:28.466471 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:19:28.539611 systemd[1541]: Queued start job for default target default.target. May 8 00:19:28.549175 systemd[1541]: Created slice app.slice - User Application Slice. May 8 00:19:28.549204 systemd[1541]: Reached target paths.target - Paths. May 8 00:19:28.549216 systemd[1541]: Reached target timers.target - Timers. May 8 00:19:28.550462 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:19:28.559897 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:19:28.559957 systemd[1541]: Reached target sockets.target - Sockets. May 8 00:19:28.559969 systemd[1541]: Reached target basic.target - Basic System. May 8 00:19:28.560003 systemd[1541]: Reached target default.target - Main User Target. May 8 00:19:28.560032 systemd[1541]: Startup finished in 87ms. May 8 00:19:28.560310 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:19:28.561496 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:19:28.629478 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). May 8 00:19:28.663976 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:28.665366 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:28.669236 systemd-logind[1419]: New session 2 of user core. May 8 00:19:28.678425 systemd[1]: Started session-2.scope - Session 2 of User core. 
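[editor's note] The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init (or join), so the unit crash-loops until that happens. A small Go preflight sketch reproducing the same check, path taken from the error message:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error above
	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println("kubelet config present; kubelet should start")
	case errors.Is(err, fs.ErrNotExist):
		// Same condition as the "no such file or directory" failure in the log:
		// kubeadm has not written the config yet.
		fmt.Println("kubelet config missing; expect kubelet.service to exit 1 and restart")
	default:
		fmt.Printf("unexpected error: %v\n", err)
	}
}

[end note]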
May 8 00:19:28.730134 sshd[1552]: pam_unix(sshd:session): session closed for user core May 8 00:19:28.739437 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:53966.service: Deactivated successfully. May 8 00:19:28.740735 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:19:28.743473 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. May 8 00:19:28.752518 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). May 8 00:19:28.753316 systemd-logind[1419]: Removed session 2. May 8 00:19:28.782692 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:28.783854 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:28.787159 systemd-logind[1419]: New session 3 of user core. May 8 00:19:28.801438 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:19:28.848804 sshd[1559]: pam_unix(sshd:session): session closed for user core May 8 00:19:28.857599 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:53980.service: Deactivated successfully. May 8 00:19:28.858947 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:19:28.861397 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. May 8 00:19:28.862460 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990). May 8 00:19:28.863154 systemd-logind[1419]: Removed session 3. May 8 00:19:28.896487 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:28.897926 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:28.901342 systemd-logind[1419]: New session 4 of user core. May 8 00:19:28.910424 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:19:28.963007 sshd[1566]: pam_unix(sshd:session): session closed for user core May 8 00:19:28.978825 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:53990.service: Deactivated successfully. May 8 00:19:28.980220 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:19:28.982340 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. May 8 00:19:28.983412 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:53998.service - OpenSSH per-connection server daemon (10.0.0.1:53998). May 8 00:19:28.984171 systemd-logind[1419]: Removed session 4. May 8 00:19:29.017271 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 53998 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:29.018569 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:29.022557 systemd-logind[1419]: New session 5 of user core. May 8 00:19:29.037451 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:19:29.099659 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:19:29.099928 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:19:29.115131 sudo[1576]: pam_unix(sudo:session): session closed for user root May 8 00:19:29.118686 sshd[1573]: pam_unix(sshd:session): session closed for user core May 8 00:19:29.127892 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:53998.service: Deactivated successfully. May 8 00:19:29.130551 systemd[1]: session-5.scope: Deactivated successfully. 
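[editor's note] For auditing session churn like the above, the "Accepted publickey" lines carry user, source address/port, and key fingerprint. A log-parsing sketch in Go — the sample line is copied from this log, and the regexp is an assumption about sshd's message wording, not an sshd API:

package main

import (
	"fmt"
	"regexp"
)

// Matches sshd lines of the form:
//   Accepted publickey for <user> from <ip> port <port> ssh2: <type> SHA256:<fp>
var accepted = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) SHA256:(\S+)`)

func main() {
	line := `Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE`
	if m := accepted.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s src=%s:%s keytype=%s fingerprint=SHA256:%s\n",
			m[1], m[2], m[3], m[4], m[5])
	}
}

[end note]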
May 8 00:19:29.131860 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. May 8 00:19:29.133168 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). May 8 00:19:29.133852 systemd-logind[1419]: Removed session 5. May 8 00:19:29.167742 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:29.169244 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:29.173263 systemd-logind[1419]: New session 6 of user core. May 8 00:19:29.181430 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:19:29.233881 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:19:29.234156 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:19:29.237231 sudo[1585]: pam_unix(sudo:session): session closed for user root May 8 00:19:29.241848 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:19:29.242110 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:19:29.266690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:19:29.267829 auditctl[1588]: No rules May 8 00:19:29.268662 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:19:29.268857 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:19:29.270604 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:19:29.293928 augenrules[1606]: No rules May 8 00:19:29.297392 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:19:29.298637 sudo[1584]: pam_unix(sudo:session): session closed for user root May 8 00:19:29.301640 sshd[1581]: pam_unix(sshd:session): session closed for user core May 8 00:19:29.315615 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:54008.service: Deactivated successfully. May 8 00:19:29.317085 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:19:29.320466 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. May 8 00:19:29.327542 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:54022.service - OpenSSH per-connection server daemon (10.0.0.1:54022). May 8 00:19:29.328347 systemd-logind[1419]: Removed session 6. May 8 00:19:29.357653 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 54022 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:19:29.358796 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:19:29.362153 systemd-logind[1419]: New session 7 of user core. May 8 00:19:29.374433 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:19:29.423569 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:19:29.423834 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:19:29.734527 systemd[1]: Starting docker.service - Docker Application Container Engine... 
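[editor's note] The "No rules" output from auditctl and augenrules above follows directly from the sudo commands that deleted the default rule files: augenrules compiles whatever *.rules files remain under /etc/audit/rules.d/. A quick Go sketch to confirm what augenrules would see, directory taken from the sudo commands in the log:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// augenrules concatenates *.rules files from this directory.
	rules, err := filepath.Glob("/etc/audit/rules.d/*.rules")
	if err != nil {
		panic(err)
	}
	if len(rules) == 0 {
		fmt.Println("no rule files left -> auditctl/augenrules report \"No rules\"")
		return
	}
	for _, r := range rules {
		fmt.Println("would load:", r)
	}
}

[end note]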
May 8 00:19:29.734713 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:19:29.999439 dockerd[1635]: time="2025-05-08T00:19:29.999307927Z" level=info msg="Starting up" May 8 00:19:30.148497 dockerd[1635]: time="2025-05-08T00:19:30.148454168Z" level=info msg="Loading containers: start." May 8 00:19:30.248466 kernel: Initializing XFRM netlink socket May 8 00:19:30.320691 systemd-networkd[1370]: docker0: Link UP May 8 00:19:30.347609 dockerd[1635]: time="2025-05-08T00:19:30.347567559Z" level=info msg="Loading containers: done." May 8 00:19:30.360524 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2622153748-merged.mount: Deactivated successfully. May 8 00:19:30.362051 dockerd[1635]: time="2025-05-08T00:19:30.361992496Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:19:30.362130 dockerd[1635]: time="2025-05-08T00:19:30.362114898Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:19:30.362243 dockerd[1635]: time="2025-05-08T00:19:30.362227133Z" level=info msg="Daemon has completed initialization" May 8 00:19:30.393271 dockerd[1635]: time="2025-05-08T00:19:30.393106440Z" level=info msg="API listen on /run/docker.sock" May 8 00:19:30.393555 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:19:31.197782 containerd[1440]: time="2025-05-08T00:19:31.197722025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:19:32.044933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036886012.mount: Deactivated successfully. 
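[editor's note] Once the daemon logs "API listen on /run/docker.sock", the Docker Engine HTTP API is reachable over that Unix socket. A minimal ping using only Go's standard library — /_ping is a standard Engine API endpoint, socket path from the log:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Dial the Unix socket the daemon reported in the log above.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", "/run/docker.sock")
			},
		},
	}
	// The host part of the URL is ignored; the transport always dials the socket.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%q\n", resp.Status, body) // expect 200 and "OK"
}

[end note]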
May 8 00:19:33.514392 containerd[1440]: time="2025-05-08T00:19:33.514331248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:33.515990 containerd[1440]: time="2025-05-08T00:19:33.515958021Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 8 00:19:33.517016 containerd[1440]: time="2025-05-08T00:19:33.516971858Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:33.519482 containerd[1440]: time="2025-05-08T00:19:33.519428488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:33.520639 containerd[1440]: time="2025-05-08T00:19:33.520597250Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.32281855s" May 8 00:19:33.520696 containerd[1440]: time="2025-05-08T00:19:33.520642515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 8 00:19:33.521715 containerd[1440]: time="2025-05-08T00:19:33.521672481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:19:35.061685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:19:35.075674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:35.169462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:35.173237 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:19:35.211065 kubelet[1847]: E0508 00:19:35.211011 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:19:35.215540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:19:35.215699 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
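[editor's note] The PullImage/ImageCreate sequence above is what a CRI-driven pull looks like from containerd's side. Roughly the same pull can be driven directly through the containerd Go client — a sketch, with the image ref taken from the log and "k8s.io" being the namespace where the CRI plugin keeps its images:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), img.Target().Digest)
}

[end note]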
May 8 00:19:35.435957 containerd[1440]: time="2025-05-08T00:19:35.435711939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:35.436883 containerd[1440]: time="2025-05-08T00:19:35.436628222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 8 00:19:35.437688 containerd[1440]: time="2025-05-08T00:19:35.437623982Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:35.440576 containerd[1440]: time="2025-05-08T00:19:35.440524343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:35.441763 containerd[1440]: time="2025-05-08T00:19:35.441734047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.920025347s" May 8 00:19:35.441809 containerd[1440]: time="2025-05-08T00:19:35.441770465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 8 00:19:35.442242 containerd[1440]: time="2025-05-08T00:19:35.442211318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:19:37.095904 containerd[1440]: time="2025-05-08T00:19:37.095859901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:37.098956 containerd[1440]: time="2025-05-08T00:19:37.098915518Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 8 00:19:37.100766 containerd[1440]: time="2025-05-08T00:19:37.100701516Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:37.103067 containerd[1440]: time="2025-05-08T00:19:37.103040749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:37.105241 containerd[1440]: time="2025-05-08T00:19:37.105121633Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.6628777s" May 8 00:19:37.105241 containerd[1440]: time="2025-05-08T00:19:37.105158608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 8 00:19:37.105623 containerd[1440]: 
time="2025-05-08T00:19:37.105589751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:19:38.375273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321328001.mount: Deactivated successfully. May 8 00:19:38.758547 containerd[1440]: time="2025-05-08T00:19:38.758369702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:38.759297 containerd[1440]: time="2025-05-08T00:19:38.759066020Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 8 00:19:38.759993 containerd[1440]: time="2025-05-08T00:19:38.759933205Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:38.761673 containerd[1440]: time="2025-05-08T00:19:38.761627479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:38.762477 containerd[1440]: time="2025-05-08T00:19:38.762397946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.656777101s" May 8 00:19:38.762477 containerd[1440]: time="2025-05-08T00:19:38.762430518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 8 00:19:38.762948 containerd[1440]: time="2025-05-08T00:19:38.762926596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:19:39.311942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295176304.mount: Deactivated successfully. 
May 8 00:19:40.294161 containerd[1440]: time="2025-05-08T00:19:40.294113139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.294658 containerd[1440]: time="2025-05-08T00:19:40.294622757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 8 00:19:40.295378 containerd[1440]: time="2025-05-08T00:19:40.295342369Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.298734 containerd[1440]: time="2025-05-08T00:19:40.298682137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.300392 containerd[1440]: time="2025-05-08T00:19:40.300327312Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.537369665s" May 8 00:19:40.300392 containerd[1440]: time="2025-05-08T00:19:40.300363285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 8 00:19:40.300911 containerd[1440]: time="2025-05-08T00:19:40.300859499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:19:40.781230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998102272.mount: Deactivated successfully. 
May 8 00:19:40.785569 containerd[1440]: time="2025-05-08T00:19:40.784808372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.785902 containerd[1440]: time="2025-05-08T00:19:40.785876305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 8 00:19:40.786712 containerd[1440]: time="2025-05-08T00:19:40.786670703Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.789316 containerd[1440]: time="2025-05-08T00:19:40.789250365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:40.790304 containerd[1440]: time="2025-05-08T00:19:40.790128152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 489.239323ms" May 8 00:19:40.790304 containerd[1440]: time="2025-05-08T00:19:40.790158243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 00:19:40.791100 containerd[1440]: time="2025-05-08T00:19:40.790990454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:19:41.345998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255313045.mount: Deactivated successfully. May 8 00:19:44.512206 containerd[1440]: time="2025-05-08T00:19:44.512146689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:44.512636 containerd[1440]: time="2025-05-08T00:19:44.512596091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 8 00:19:44.513680 containerd[1440]: time="2025-05-08T00:19:44.513641573Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:44.517212 containerd[1440]: time="2025-05-08T00:19:44.517161805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:19:44.518560 containerd[1440]: time="2025-05-08T00:19:44.518423025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.727325335s" May 8 00:19:44.518560 containerd[1440]: time="2025-05-08T00:19:44.518455114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 8 00:19:45.323886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
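[editor's note] For a rough sense of registry throughput, the fields in the pull messages are enough: the etcd pull above reports ~67.8 MB read in ~3.73 s. A tiny Go check of that arithmetic, values copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 67812471 // "bytes read" for etcd:3.5.16-0, from the log
	elapsed, _ := time.ParseDuration("3.727325335s")
	mibps := float64(bytesRead) / (1 << 20) / elapsed.Seconds()
	fmt.Printf("~%.1f MiB/s effective pull rate\n", mibps) // ~17.3 MiB/s
}

[end note]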
May 8 00:19:45.333487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:45.430021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:45.434543 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:19:45.468209 kubelet[2009]: E0508 00:19:45.468143 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:19:45.470982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:19:45.471124 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:19:49.134797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:49.149537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:49.175062 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit session-7.scope)... May 8 00:19:49.175084 systemd[1]: Reloading... May 8 00:19:49.247306 zram_generator::config[2064]: No configuration found. May 8 00:19:49.527968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:19:49.579997 systemd[1]: Reloading finished in 404 ms. May 8 00:19:49.616909 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:49.618945 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:19:49.619179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:49.620725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:49.715726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:49.719104 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:19:49.755195 kubelet[2111]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:19:49.755195 kubelet[2111]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:19:49.755195 kubelet[2111]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:19:49.755547 kubelet[2111]: I0508 00:19:49.755241 2111 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:19:51.081791 kubelet[2111]: I0508 00:19:51.081739 2111 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:19:51.081791 kubelet[2111]: I0508 00:19:51.081775 2111 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:19:51.083797 kubelet[2111]: I0508 00:19:51.083768 2111 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:19:51.111599 kubelet[2111]: E0508 00:19:51.111540 2111 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:51.113320 kubelet[2111]: I0508 00:19:51.113299 2111 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:19:51.120286 kubelet[2111]: E0508 00:19:51.120231 2111 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:19:51.120286 kubelet[2111]: I0508 00:19:51.120264 2111 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:19:51.123172 kubelet[2111]: I0508 00:19:51.123125 2111 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:19:51.123836 kubelet[2111]: I0508 00:19:51.123782 2111 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:19:51.124006 kubelet[2111]: I0508 00:19:51.123832 2111 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:19:51.124087 kubelet[2111]: I0508 00:19:51.124069 2111 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:19:51.124087 kubelet[2111]: I0508 00:19:51.124078 2111 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:19:51.124299 kubelet[2111]: I0508 00:19:51.124269 2111 state_mem.go:36] "Initialized new in-memory state store" May 8 00:19:51.126757 kubelet[2111]: I0508 00:19:51.126715 2111 kubelet.go:446] "Attempting to sync node with API server" May 8 00:19:51.126757 kubelet[2111]: I0508 00:19:51.126742 2111 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:19:51.126839 kubelet[2111]: I0508 00:19:51.126762 2111 kubelet.go:352] "Adding apiserver pod source" May 8 00:19:51.126839 kubelet[2111]: I0508 00:19:51.126772 2111 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:19:51.129691 kubelet[2111]: W0508 00:19:51.129547 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:51.129691 kubelet[2111]: E0508 00:19:51.129621 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:51.129691 kubelet[2111]: W0508 00:19:51.129624 2111 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:51.129691 kubelet[2111]: E0508 00:19:51.129669 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:51.131590 kubelet[2111]: I0508 00:19:51.131563 2111 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:19:51.132153 kubelet[2111]: I0508 00:19:51.132124 2111 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:19:51.132299 kubelet[2111]: W0508 00:19:51.132275 2111 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:19:51.133150 kubelet[2111]: I0508 00:19:51.133129 2111 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:19:51.133181 kubelet[2111]: I0508 00:19:51.133165 2111 server.go:1287] "Started kubelet" May 8 00:19:51.134784 kubelet[2111]: I0508 00:19:51.134362 2111 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:19:51.135287 kubelet[2111]: I0508 00:19:51.135218 2111 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:19:51.135669 kubelet[2111]: I0508 00:19:51.135513 2111 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:19:51.135669 kubelet[2111]: I0508 00:19:51.135572 2111 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:19:51.137416 kubelet[2111]: I0508 00:19:51.136719 2111 server.go:490] "Adding debug handlers to kubelet server" May 8 00:19:51.137556 kubelet[2111]: E0508 00:19:51.137097 2111 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d65482b9963a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:19:51.13314397 +0000 UTC m=+1.411179488,LastTimestamp:2025-05-08 00:19:51.13314397 +0000 UTC m=+1.411179488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:19:51.137876 kubelet[2111]: I0508 00:19:51.137791 2111 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:19:51.137990 kubelet[2111]: E0508 00:19:51.137973 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:51.138074 kubelet[2111]: I0508 00:19:51.138064 2111 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:19:51.138666 kubelet[2111]: I0508 00:19:51.138643 2111 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:19:51.139211 kubelet[2111]: I0508 00:19:51.139187 2111 factory.go:221] Registration of the systemd container factory successfully May 8 00:19:51.139313 kubelet[2111]: I0508 00:19:51.139293 2111 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:19:51.139464 kubelet[2111]: I0508 00:19:51.139441 2111 reconciler.go:26] "Reconciler: start to sync state" May 8 00:19:51.139959 kubelet[2111]: E0508 00:19:51.139662 2111 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:19:51.139959 kubelet[2111]: E0508 00:19:51.139799 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" May 8 00:19:51.140468 kubelet[2111]: W0508 00:19:51.140252 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:51.140623 kubelet[2111]: E0508 00:19:51.140581 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:51.140708 kubelet[2111]: I0508 00:19:51.140684 2111 factory.go:221] Registration of the containerd container factory successfully May 8 00:19:51.150361 kubelet[2111]: I0508 00:19:51.150298 2111 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:19:51.152607 kubelet[2111]: I0508 00:19:51.152422 2111 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:19:51.152607 kubelet[2111]: I0508 00:19:51.152500 2111 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:19:51.152607 kubelet[2111]: I0508 00:19:51.152521 2111 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
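[editor's note] All the "connection refused" errors against 10.0.0.45:6443 above are the normal bootstrap chicken-and-egg: the kubelet needs the API server, but the API server is itself a static pod the kubelet has yet to start. A hedged Go poller for the same endpoint — address from the log, /healthz being a standard kube-apiserver endpoint; certificate verification is skipped because this probe has no CA bundle this early:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// No CA bundle this early in bootstrap, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://10.0.0.45:6443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver answering:", resp.Status)
			return
		}
		fmt.Println("waiting:", err) // "connection refused" until the static pod is up
		time.Sleep(2 * time.Second)
	}
}

[end note]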
May 8 00:19:51.152607 kubelet[2111]: I0508 00:19:51.152528 2111 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:19:51.152733 kubelet[2111]: E0508 00:19:51.152613 2111 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:19:51.152733 kubelet[2111]: I0508 00:19:51.152671 2111 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:19:51.152733 kubelet[2111]: I0508 00:19:51.152683 2111 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:19:51.152733 kubelet[2111]: I0508 00:19:51.152697 2111 state_mem.go:36] "Initialized new in-memory state store" May 8 00:19:51.153329 kubelet[2111]: W0508 00:19:51.153225 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:51.153933 kubelet[2111]: E0508 00:19:51.153346 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:51.222126 kubelet[2111]: I0508 00:19:51.222021 2111 policy_none.go:49] "None policy: Start" May 8 00:19:51.222126 kubelet[2111]: I0508 00:19:51.222051 2111 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:19:51.222126 kubelet[2111]: I0508 00:19:51.222063 2111 state_mem.go:35] "Initializing new in-memory state store" May 8 00:19:51.227680 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:19:51.238193 kubelet[2111]: E0508 00:19:51.238154 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:51.240044 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:19:51.242788 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:19:51.252834 kubelet[2111]: E0508 00:19:51.252786 2111 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:19:51.253051 kubelet[2111]: I0508 00:19:51.253021 2111 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:19:51.253435 kubelet[2111]: I0508 00:19:51.253410 2111 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:19:51.253483 kubelet[2111]: I0508 00:19:51.253429 2111 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:19:51.253690 kubelet[2111]: I0508 00:19:51.253665 2111 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:19:51.254690 kubelet[2111]: E0508 00:19:51.254668 2111 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:19:51.254757 kubelet[2111]: E0508 00:19:51.254710 2111 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:19:51.340725 kubelet[2111]: E0508 00:19:51.340600 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" May 8 00:19:51.354694 kubelet[2111]: I0508 00:19:51.354655 2111 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:19:51.355085 kubelet[2111]: E0508 00:19:51.355045 2111 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 8 00:19:51.462861 systemd[1]: Created slice kubepods-burstable-pod4a37f0b7816e2221b5417355bc39d33d.slice - libcontainer container kubepods-burstable-pod4a37f0b7816e2221b5417355bc39d33d.slice. May 8 00:19:51.482990 kubelet[2111]: E0508 00:19:51.482955 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:51.484598 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 00:19:51.495231 kubelet[2111]: E0508 00:19:51.495195 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:51.497498 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 8 00:19:51.498995 kubelet[2111]: E0508 00:19:51.498951 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:51.541347 kubelet[2111]: I0508 00:19:51.541314 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:51.541434 kubelet[2111]: I0508 00:19:51.541357 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:19:51.541434 kubelet[2111]: I0508 00:19:51.541380 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:19:51.541434 kubelet[2111]: I0508 00:19:51.541395 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:51.541434 kubelet[2111]: I0508 00:19:51.541435 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:51.541525 kubelet[2111]: I0508 00:19:51.541450 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:19:51.541525 kubelet[2111]: I0508 00:19:51.541464 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:51.541525 kubelet[2111]: I0508 00:19:51.541478 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:51.541525 kubelet[2111]: I0508 00:19:51.541496 2111 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:19:51.556393 kubelet[2111]: I0508 00:19:51.556370 2111 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:19:51.556737 kubelet[2111]: E0508 00:19:51.556695 2111 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 8 00:19:51.741329 kubelet[2111]: E0508 00:19:51.741264 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" May 8 00:19:51.783658 kubelet[2111]: E0508 00:19:51.783623 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:51.784439 containerd[1440]: time="2025-05-08T00:19:51.784400831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a37f0b7816e2221b5417355bc39d33d,Namespace:kube-system,Attempt:0,}" May 8 00:19:51.796016 kubelet[2111]: E0508 00:19:51.795990 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:51.796612 containerd[1440]: time="2025-05-08T00:19:51.796475428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:19:51.800053 kubelet[2111]: E0508 00:19:51.799970 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:51.800356 containerd[1440]: time="2025-05-08T00:19:51.800326371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:19:51.958582 kubelet[2111]: I0508 00:19:51.958547 2111 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:19:51.958891 kubelet[2111]: E0508 00:19:51.958865 2111 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 8 00:19:52.257215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229037482.mount: Deactivated successfully. 
May 8 00:19:52.261914 containerd[1440]: time="2025-05-08T00:19:52.261832331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:19:52.263765 containerd[1440]: time="2025-05-08T00:19:52.263727677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:19:52.264458 containerd[1440]: time="2025-05-08T00:19:52.264428390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:19:52.265929 containerd[1440]: time="2025-05-08T00:19:52.265823975Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:19:52.266724 containerd[1440]: time="2025-05-08T00:19:52.266687554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 00:19:52.267253 containerd[1440]: time="2025-05-08T00:19:52.267230402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:19:52.267419 containerd[1440]: time="2025-05-08T00:19:52.267391828Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:19:52.270274 containerd[1440]: time="2025-05-08T00:19:52.270239447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:19:52.271921 containerd[1440]: time="2025-05-08T00:19:52.271893074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.505212ms" May 8 00:19:52.272825 containerd[1440]: time="2025-05-08T00:19:52.272780897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.230976ms" May 8 00:19:52.275218 containerd[1440]: time="2025-05-08T00:19:52.275086029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.601143ms" May 8 00:19:52.393889 containerd[1440]: time="2025-05-08T00:19:52.393795812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:19:52.394052 containerd[1440]: time="2025-05-08T00:19:52.393872584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:19:52.394052 containerd[1440]: time="2025-05-08T00:19:52.393896268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.394052 containerd[1440]: time="2025-05-08T00:19:52.393988603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.394625 containerd[1440]: time="2025-05-08T00:19:52.394536291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:19:52.394625 containerd[1440]: time="2025-05-08T00:19:52.394593620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:19:52.394625 containerd[1440]: time="2025-05-08T00:19:52.394606142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.394846 containerd[1440]: time="2025-05-08T00:19:52.394684915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.398611 containerd[1440]: time="2025-05-08T00:19:52.398432119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:19:52.398776 containerd[1440]: time="2025-05-08T00:19:52.398735528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:19:52.399013 containerd[1440]: time="2025-05-08T00:19:52.398953763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.399291 containerd[1440]: time="2025-05-08T00:19:52.399249771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:19:52.413485 systemd[1]: Started cri-containerd-5ba5b137fb45e8d00e9e710cd65b1f058aff751e4b0bace47ead9a1f5263d490.scope - libcontainer container 5ba5b137fb45e8d00e9e710cd65b1f058aff751e4b0bace47ead9a1f5263d490. May 8 00:19:52.417630 systemd[1]: Started cri-containerd-49877986c4fcfc8ac409a6d653c80729ee69fed70e7c0c0d7e896977b2037920.scope - libcontainer container 49877986c4fcfc8ac409a6d653c80729ee69fed70e7c0c0d7e896977b2037920. May 8 00:19:52.419195 systemd[1]: Started cri-containerd-a885bfa0867ee0317686e5f43a8a11b9026eb24bed5a0516853c8f3c72f12aa7.scope - libcontainer container a885bfa0867ee0317686e5f43a8a11b9026eb24bed5a0516853c8f3c72f12aa7. 
May 8 00:19:52.448639 containerd[1440]: time="2025-05-08T00:19:52.448538279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ba5b137fb45e8d00e9e710cd65b1f058aff751e4b0bace47ead9a1f5263d490\"" May 8 00:19:52.450107 kubelet[2111]: E0508 00:19:52.450020 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:52.453008 containerd[1440]: time="2025-05-08T00:19:52.452922666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a37f0b7816e2221b5417355bc39d33d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a885bfa0867ee0317686e5f43a8a11b9026eb24bed5a0516853c8f3c72f12aa7\"" May 8 00:19:52.453116 containerd[1440]: time="2025-05-08T00:19:52.453089493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"49877986c4fcfc8ac409a6d653c80729ee69fed70e7c0c0d7e896977b2037920\"" May 8 00:19:52.453874 kubelet[2111]: E0508 00:19:52.453851 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:52.454833 kubelet[2111]: E0508 00:19:52.454796 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:52.456563 kubelet[2111]: W0508 00:19:52.456485 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:52.456619 kubelet[2111]: E0508 00:19:52.456570 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:52.456961 containerd[1440]: time="2025-05-08T00:19:52.456815734Z" level=info msg="CreateContainer within sandbox \"5ba5b137fb45e8d00e9e710cd65b1f058aff751e4b0bace47ead9a1f5263d490\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:19:52.457646 containerd[1440]: time="2025-05-08T00:19:52.457609942Z" level=info msg="CreateContainer within sandbox \"49877986c4fcfc8ac409a6d653c80729ee69fed70e7c0c0d7e896977b2037920\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:19:52.457960 containerd[1440]: time="2025-05-08T00:19:52.457932874Z" level=info msg="CreateContainer within sandbox \"a885bfa0867ee0317686e5f43a8a11b9026eb24bed5a0516853c8f3c72f12aa7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:19:52.476479 containerd[1440]: time="2025-05-08T00:19:52.476430937Z" level=info msg="CreateContainer within sandbox \"5ba5b137fb45e8d00e9e710cd65b1f058aff751e4b0bace47ead9a1f5263d490\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c2e3b563cca09889205d92876bff7f6e1c9c2463cb51d55d5c627a149dd05b8\"" May 8 00:19:52.477166 containerd[1440]: 
time="2025-05-08T00:19:52.477086563Z" level=info msg="StartContainer for \"8c2e3b563cca09889205d92876bff7f6e1c9c2463cb51d55d5c627a149dd05b8\"" May 8 00:19:52.482555 containerd[1440]: time="2025-05-08T00:19:52.482504877Z" level=info msg="CreateContainer within sandbox \"a885bfa0867ee0317686e5f43a8a11b9026eb24bed5a0516853c8f3c72f12aa7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4fb7f8bbd29654a9ed399dbe4b2d1534a7fa1132d06f5189d35682dd1a0780f\"" May 8 00:19:52.483307 containerd[1440]: time="2025-05-08T00:19:52.483154342Z" level=info msg="StartContainer for \"a4fb7f8bbd29654a9ed399dbe4b2d1534a7fa1132d06f5189d35682dd1a0780f\"" May 8 00:19:52.483989 containerd[1440]: time="2025-05-08T00:19:52.483949190Z" level=info msg="CreateContainer within sandbox \"49877986c4fcfc8ac409a6d653c80729ee69fed70e7c0c0d7e896977b2037920\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aef7f961073cddc32c7d683df5b01c863be1a799f6fdaa8f022430e7c5a59cbe\"" May 8 00:19:52.486160 containerd[1440]: time="2025-05-08T00:19:52.484586533Z" level=info msg="StartContainer for \"aef7f961073cddc32c7d683df5b01c863be1a799f6fdaa8f022430e7c5a59cbe\"" May 8 00:19:52.505456 systemd[1]: Started cri-containerd-8c2e3b563cca09889205d92876bff7f6e1c9c2463cb51d55d5c627a149dd05b8.scope - libcontainer container 8c2e3b563cca09889205d92876bff7f6e1c9c2463cb51d55d5c627a149dd05b8. May 8 00:19:52.517428 systemd[1]: Started cri-containerd-a4fb7f8bbd29654a9ed399dbe4b2d1534a7fa1132d06f5189d35682dd1a0780f.scope - libcontainer container a4fb7f8bbd29654a9ed399dbe4b2d1534a7fa1132d06f5189d35682dd1a0780f. May 8 00:19:52.518311 systemd[1]: Started cri-containerd-aef7f961073cddc32c7d683df5b01c863be1a799f6fdaa8f022430e7c5a59cbe.scope - libcontainer container aef7f961073cddc32c7d683df5b01c863be1a799f6fdaa8f022430e7c5a59cbe. 
May 8 00:19:52.539204 kubelet[2111]: W0508 00:19:52.539170 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:52.539323 kubelet[2111]: E0508 00:19:52.539299 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:52.543707 kubelet[2111]: E0508 00:19:52.543676 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" May 8 00:19:52.554127 containerd[1440]: time="2025-05-08T00:19:52.554011008Z" level=info msg="StartContainer for \"aef7f961073cddc32c7d683df5b01c863be1a799f6fdaa8f022430e7c5a59cbe\" returns successfully" May 8 00:19:52.554127 containerd[1440]: time="2025-05-08T00:19:52.554087740Z" level=info msg="StartContainer for \"8c2e3b563cca09889205d92876bff7f6e1c9c2463cb51d55d5c627a149dd05b8\" returns successfully" May 8 00:19:52.566738 kubelet[2111]: W0508 00:19:52.563717 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:52.566738 kubelet[2111]: E0508 00:19:52.563807 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:52.566870 containerd[1440]: time="2025-05-08T00:19:52.564292506Z" level=info msg="StartContainer for \"a4fb7f8bbd29654a9ed399dbe4b2d1534a7fa1132d06f5189d35682dd1a0780f\" returns successfully" May 8 00:19:52.699401 kubelet[2111]: W0508 00:19:52.699336 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 8 00:19:52.699401 kubelet[2111]: E0508 00:19:52.699407 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 8 00:19:52.760783 kubelet[2111]: I0508 00:19:52.760746 2111 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:19:53.162938 kubelet[2111]: E0508 00:19:53.162892 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:53.163043 kubelet[2111]: E0508 00:19:53.163034 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:53.166660 kubelet[2111]: E0508 00:19:53.165616 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:53.166660 kubelet[2111]: E0508 00:19:53.166133 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:53.168930 kubelet[2111]: E0508 00:19:53.168901 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:53.169024 kubelet[2111]: E0508 00:19:53.169004 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:54.008504 kubelet[2111]: I0508 00:19:54.008460 2111 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:19:54.008836 kubelet[2111]: E0508 00:19:54.008501 2111 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:19:54.013229 kubelet[2111]: E0508 00:19:54.013194 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.114132 kubelet[2111]: E0508 00:19:54.114088 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.171617 kubelet[2111]: E0508 00:19:54.171590 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:54.172183 kubelet[2111]: E0508 00:19:54.171684 2111 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:19:54.172183 kubelet[2111]: E0508 00:19:54.171907 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:54.172183 kubelet[2111]: E0508 00:19:54.171952 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:54.215190 kubelet[2111]: E0508 00:19:54.215150 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.316020 kubelet[2111]: E0508 00:19:54.315903 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.416365 kubelet[2111]: E0508 00:19:54.416324 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.516998 kubelet[2111]: E0508 00:19:54.516953 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.617150 kubelet[2111]: E0508 00:19:54.617029 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.718102 kubelet[2111]: E0508 00:19:54.718055 2111 kubelet_node_status.go:467] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 8 00:19:54.819096 kubelet[2111]: E0508 00:19:54.819047 2111 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:54.839527 kubelet[2111]: I0508 00:19:54.839480 2111 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:19:54.851820 kubelet[2111]: I0508 00:19:54.851319 2111 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:19:54.855435 kubelet[2111]: I0508 00:19:54.855408 2111 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:19:55.128313 kubelet[2111]: I0508 00:19:55.128275 2111 apiserver.go:52] "Watching apiserver" May 8 00:19:55.130620 kubelet[2111]: E0508 00:19:55.130587 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:55.139482 kubelet[2111]: I0508 00:19:55.139450 2111 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:19:55.171734 kubelet[2111]: I0508 00:19:55.171713 2111 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:19:55.172228 kubelet[2111]: E0508 00:19:55.172206 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:55.175569 kubelet[2111]: E0508 00:19:55.175425 2111 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:19:55.175982 kubelet[2111]: E0508 00:19:55.175819 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:55.679228 systemd[1]: Reloading requested from client PID 2392 ('systemctl') (unit session-7.scope)... May 8 00:19:55.679242 systemd[1]: Reloading... May 8 00:19:55.747406 zram_generator::config[2431]: No configuration found. May 8 00:19:55.829799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:19:55.893550 systemd[1]: Reloading finished in 213 ms. May 8 00:19:55.926989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:55.942365 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:19:55.942607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:55.942664 systemd[1]: kubelet.service: Consumed 1.745s CPU time, 122.9M memory peak, 0B memory swap peak. May 8 00:19:55.954682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:19:56.052175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:19:56.056375 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:19:56.095767 kubelet[2473]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:19:56.095767 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:19:56.095767 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:19:56.096243 kubelet[2473]: I0508 00:19:56.095813 2473 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:19:56.101413 kubelet[2473]: I0508 00:19:56.101371 2473 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:19:56.101413 kubelet[2473]: I0508 00:19:56.101404 2473 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:19:56.101656 kubelet[2473]: I0508 00:19:56.101631 2473 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:19:56.102881 kubelet[2473]: I0508 00:19:56.102858 2473 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:19:56.105027 kubelet[2473]: I0508 00:19:56.105001 2473 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:19:56.108134 kubelet[2473]: E0508 00:19:56.107702 2473 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:19:56.108134 kubelet[2473]: I0508 00:19:56.107736 2473 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:19:56.110783 kubelet[2473]: I0508 00:19:56.110761 2473 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:19:56.111011 kubelet[2473]: I0508 00:19:56.110988 2473 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:19:56.111293 kubelet[2473]: I0508 00:19:56.111011 2473 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:19:56.111375 kubelet[2473]: I0508 00:19:56.111327 2473 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:19:56.111375 kubelet[2473]: I0508 00:19:56.111342 2473 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:19:56.111422 kubelet[2473]: I0508 00:19:56.111389 2473 state_mem.go:36] "Initialized new in-memory state store" May 8 00:19:56.111587 kubelet[2473]: I0508 00:19:56.111555 2473 kubelet.go:446] "Attempting to sync node with API server" May 8 00:19:56.111587 kubelet[2473]: I0508 00:19:56.111572 2473 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:19:56.111587 kubelet[2473]: I0508 00:19:56.111588 2473 kubelet.go:352] "Adding apiserver pod source" May 8 00:19:56.111671 kubelet[2473]: I0508 00:19:56.111596 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:19:56.113022 kubelet[2473]: I0508 00:19:56.112719 2473 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:19:56.113407 kubelet[2473]: I0508 00:19:56.113146 2473 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:19:56.113600 kubelet[2473]: I0508 00:19:56.113576 2473 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:19:56.113645 kubelet[2473]: I0508 00:19:56.113614 2473 server.go:1287] "Started kubelet" May 8 00:19:56.114425 kubelet[2473]: I0508 00:19:56.114127 2473 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:19:56.114526 kubelet[2473]: I0508 00:19:56.114430 2473 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:19:56.114677 kubelet[2473]: I0508 00:19:56.114654 2473 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:19:56.115786 kubelet[2473]: I0508 00:19:56.115740 2473 server.go:490] "Adding debug handlers to kubelet server" May 8 00:19:56.116941 kubelet[2473]: I0508 00:19:56.116863 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:19:56.117022 kubelet[2473]: I0508 00:19:56.116949 2473 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:19:56.118090 kubelet[2473]: E0508 00:19:56.117937 2473 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:19:56.118333 kubelet[2473]: I0508 00:19:56.118223 2473 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:19:56.118597 kubelet[2473]: I0508 00:19:56.118241 2473 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:19:56.118791 kubelet[2473]: I0508 00:19:56.118777 2473 reconciler.go:26] "Reconciler: start to sync state" May 8 00:19:56.127365 kubelet[2473]: I0508 00:19:56.125762 2473 factory.go:221] Registration of the systemd container factory successfully May 8 00:19:56.127365 kubelet[2473]: I0508 00:19:56.125884 2473 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:19:56.136556 kubelet[2473]: E0508 00:19:56.136493 2473 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:19:56.141411 kubelet[2473]: I0508 00:19:56.141358 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:19:56.142115 kubelet[2473]: I0508 00:19:56.141873 2473 factory.go:221] Registration of the containerd container factory successfully May 8 00:19:56.147638 kubelet[2473]: I0508 00:19:56.147599 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:19:56.147638 kubelet[2473]: I0508 00:19:56.147633 2473 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:19:56.147748 kubelet[2473]: I0508 00:19:56.147659 2473 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:19:56.147748 kubelet[2473]: I0508 00:19:56.147667 2473 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:19:56.147748 kubelet[2473]: E0508 00:19:56.147706 2473 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:19:56.192362 kubelet[2473]: I0508 00:19:56.192253 2473 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:19:56.192867 kubelet[2473]: I0508 00:19:56.192801 2473 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.192937 2473 state_mem.go:36] "Initialized new in-memory state store" May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.193096 2473 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.193107 2473 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.193125 2473 policy_none.go:49] "None policy: Start" May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.193136 2473 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:19:56.193500 kubelet[2473]: I0508 00:19:56.193144 2473 state_mem.go:35] "Initializing new in-memory state store" May 8 00:19:56.193865 kubelet[2473]: I0508 00:19:56.193847 2473 state_mem.go:75] "Updated machine memory state" May 8 00:19:56.199656 kubelet[2473]: I0508 00:19:56.199626 2473 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:19:56.200172 kubelet[2473]: I0508 00:19:56.200007 2473 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:19:56.200352 kubelet[2473]: I0508 00:19:56.200044 2473 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:19:56.201043 kubelet[2473]: I0508 00:19:56.200850 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:19:56.202159 kubelet[2473]: E0508 00:19:56.201681 2473 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:19:56.248722 kubelet[2473]: I0508 00:19:56.248530 2473 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:19:56.248722 kubelet[2473]: I0508 00:19:56.248597 2473 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:19:56.248722 kubelet[2473]: I0508 00:19:56.248620 2473 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.254685 kubelet[2473]: E0508 00:19:56.254656 2473 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:19:56.254806 kubelet[2473]: E0508 00:19:56.254773 2473 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.254877 kubelet[2473]: E0508 00:19:56.254859 2473 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:19:56.305239 kubelet[2473]: I0508 00:19:56.305189 2473 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:19:56.312555 kubelet[2473]: I0508 00:19:56.311523 2473 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:19:56.312555 kubelet[2473]: I0508 00:19:56.311620 2473 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:19:56.420699 kubelet[2473]: I0508 00:19:56.420660 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:19:56.420699 kubelet[2473]: I0508 00:19:56.420702 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.420940 kubelet[2473]: I0508 00:19:56.420726 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.420940 kubelet[2473]: I0508 00:19:56.420754 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:19:56.420940 kubelet[2473]: I0508 00:19:56.420769 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " 
pod="kube-system/kube-apiserver-localhost" May 8 00:19:56.420940 kubelet[2473]: I0508 00:19:56.420784 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a37f0b7816e2221b5417355bc39d33d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a37f0b7816e2221b5417355bc39d33d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:19:56.420940 kubelet[2473]: I0508 00:19:56.420799 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.421047 kubelet[2473]: I0508 00:19:56.420814 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.421047 kubelet[2473]: I0508 00:19:56.420834 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:19:56.555994 kubelet[2473]: E0508 00:19:56.555840 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:56.555994 kubelet[2473]: E0508 00:19:56.555888 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:56.555994 kubelet[2473]: E0508 00:19:56.555956 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:56.677999 sudo[2509]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:19:56.678274 sudo[2509]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:19:57.107635 sudo[2509]: pam_unix(sudo:session): session closed for user root May 8 00:19:57.113371 kubelet[2473]: I0508 00:19:57.112314 2473 apiserver.go:52] "Watching apiserver" May 8 00:19:57.119310 kubelet[2473]: I0508 00:19:57.118941 2473 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:19:57.169525 kubelet[2473]: E0508 00:19:57.169475 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:57.169670 kubelet[2473]: I0508 00:19:57.169655 2473 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:19:57.170318 kubelet[2473]: I0508 00:19:57.170299 2473 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:19:57.178555 kubelet[2473]: 
E0508 00:19:57.178516 2473 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:19:57.178693 kubelet[2473]: E0508 00:19:57.178674 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:57.178755 kubelet[2473]: E0508 00:19:57.178673 2473 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:19:57.178834 kubelet[2473]: E0508 00:19:57.178822 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:57.199928 kubelet[2473]: I0508 00:19:57.199864 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.199845853 podStartE2EDuration="3.199845853s" podCreationTimestamp="2025-05-08 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:19:57.191758709 +0000 UTC m=+1.132233325" watchObservedRunningTime="2025-05-08 00:19:57.199845853 +0000 UTC m=+1.140320429" May 8 00:19:57.207765 kubelet[2473]: I0508 00:19:57.207698 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.207680808 podStartE2EDuration="3.207680808s" podCreationTimestamp="2025-05-08 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:19:57.200007032 +0000 UTC m=+1.140481648" watchObservedRunningTime="2025-05-08 00:19:57.207680808 +0000 UTC m=+1.148155424" May 8 00:19:57.216841 kubelet[2473]: I0508 00:19:57.216746 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.216716464 podStartE2EDuration="3.216716464s" podCreationTimestamp="2025-05-08 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:19:57.207855149 +0000 UTC m=+1.148329765" watchObservedRunningTime="2025-05-08 00:19:57.216716464 +0000 UTC m=+1.157191080" May 8 00:19:58.170985 kubelet[2473]: E0508 00:19:58.170959 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:58.171308 kubelet[2473]: E0508 00:19:58.171054 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:19:58.627759 sudo[1617]: pam_unix(sudo:session): session closed for user root May 8 00:19:58.629201 sshd[1614]: pam_unix(sshd:session): session closed for user core May 8 00:19:58.632337 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. May 8 00:19:58.632713 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:54022.service: Deactivated successfully. May 8 00:19:58.634468 systemd[1]: session-7.scope: Deactivated successfully. 
May 8 00:19:58.634630 systemd[1]: session-7.scope: Consumed 6.680s CPU time, 154.8M memory peak, 0B memory swap peak. May 8 00:19:58.635190 systemd-logind[1419]: Removed session 7. May 8 00:19:59.173002 kubelet[2473]: E0508 00:19:59.172901 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:00.721676 kubelet[2473]: I0508 00:20:00.721645 2473 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:20:00.722085 containerd[1440]: time="2025-05-08T00:20:00.721933015Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:20:00.722396 kubelet[2473]: I0508 00:20:00.722380 2473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:20:00.724751 kubelet[2473]: E0508 00:20:00.724732 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:00.979735 kubelet[2473]: E0508 00:20:00.979623 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:01.731669 systemd[1]: Created slice kubepods-besteffort-pod89e616bb_ebc6_4d0e_9a04_fe851a161c22.slice - libcontainer container kubepods-besteffort-pod89e616bb_ebc6_4d0e_9a04_fe851a161c22.slice. May 8 00:20:01.751742 systemd[1]: Created slice kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice - libcontainer container kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice. 
May 8 00:20:01.758043 kubelet[2473]: I0508 00:20:01.758002 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-cgroup\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758043 kubelet[2473]: I0508 00:20:01.758043 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-lib-modules\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758373 kubelet[2473]: I0508 00:20:01.758059 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62799f9d-162f-4b84-8843-4677bf722d37-clustermesh-secrets\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758373 kubelet[2473]: I0508 00:20:01.758076 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-hubble-tls\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758373 kubelet[2473]: I0508 00:20:01.758097 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rlp\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-kube-api-access-79rlp\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758373 kubelet[2473]: I0508 00:20:01.758115 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqd4m\" (UniqueName: \"kubernetes.io/projected/89e616bb-ebc6-4d0e-9a04-fe851a161c22-kube-api-access-gqd4m\") pod \"kube-proxy-dvnnr\" (UID: \"89e616bb-ebc6-4d0e-9a04-fe851a161c22\") " pod="kube-system/kube-proxy-dvnnr" May 8 00:20:01.758373 kubelet[2473]: I0508 00:20:01.758131 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-bpf-maps\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758148 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89e616bb-ebc6-4d0e-9a04-fe851a161c22-xtables-lock\") pod \"kube-proxy-dvnnr\" (UID: \"89e616bb-ebc6-4d0e-9a04-fe851a161c22\") " pod="kube-system/kube-proxy-dvnnr" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758163 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89e616bb-ebc6-4d0e-9a04-fe851a161c22-lib-modules\") pod \"kube-proxy-dvnnr\" (UID: \"89e616bb-ebc6-4d0e-9a04-fe851a161c22\") " pod="kube-system/kube-proxy-dvnnr" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758179 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-run\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758193 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-hostproc\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758216 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-xtables-lock\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758483 kubelet[2473]: I0508 00:20:01.758231 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-net\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758602 kubelet[2473]: I0508 00:20:01.758248 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89e616bb-ebc6-4d0e-9a04-fe851a161c22-kube-proxy\") pod \"kube-proxy-dvnnr\" (UID: \"89e616bb-ebc6-4d0e-9a04-fe851a161c22\") " pod="kube-system/kube-proxy-dvnnr" May 8 00:20:01.758602 kubelet[2473]: I0508 00:20:01.758264 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cni-path\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758602 kubelet[2473]: I0508 00:20:01.758293 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-etc-cni-netd\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758602 kubelet[2473]: I0508 00:20:01.758311 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62799f9d-162f-4b84-8843-4677bf722d37-cilium-config-path\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.758602 kubelet[2473]: I0508 00:20:01.758328 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-kernel\") pod \"cilium-2z4lk\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " pod="kube-system/cilium-2z4lk" May 8 00:20:01.977389 systemd[1]: Created slice kubepods-besteffort-pod9b022244_201d_4461_9622_e9cadb32e96f.slice - libcontainer container kubepods-besteffort-pod9b022244_201d_4461_9622_e9cadb32e96f.slice. 
May 8 00:20:02.045518 kubelet[2473]: E0508 00:20:02.045336 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:02.046530 containerd[1440]: time="2025-05-08T00:20:02.046108517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvnnr,Uid:89e616bb-ebc6-4d0e-9a04-fe851a161c22,Namespace:kube-system,Attempt:0,}" May 8 00:20:02.055341 kubelet[2473]: E0508 00:20:02.055303 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:02.055991 containerd[1440]: time="2025-05-08T00:20:02.055698968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2z4lk,Uid:62799f9d-162f-4b84-8843-4677bf722d37,Namespace:kube-system,Attempt:0,}" May 8 00:20:02.060586 kubelet[2473]: I0508 00:20:02.060464 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfsq\" (UniqueName: \"kubernetes.io/projected/9b022244-201d-4461-9622-e9cadb32e96f-kube-api-access-2kfsq\") pod \"cilium-operator-6c4d7847fc-9fchb\" (UID: \"9b022244-201d-4461-9622-e9cadb32e96f\") " pod="kube-system/cilium-operator-6c4d7847fc-9fchb" May 8 00:20:02.060586 kubelet[2473]: I0508 00:20:02.060544 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b022244-201d-4461-9622-e9cadb32e96f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9fchb\" (UID: \"9b022244-201d-4461-9622-e9cadb32e96f\") " pod="kube-system/cilium-operator-6c4d7847fc-9fchb" May 8 00:20:02.108478 containerd[1440]: time="2025-05-08T00:20:02.108020153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:02.108478 containerd[1440]: time="2025-05-08T00:20:02.108307817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:02.108478 containerd[1440]: time="2025-05-08T00:20:02.108322339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.108863 containerd[1440]: time="2025-05-08T00:20:02.108492193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:02.108863 containerd[1440]: time="2025-05-08T00:20:02.108541317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:02.108863 containerd[1440]: time="2025-05-08T00:20:02.108556878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.108863 containerd[1440]: time="2025-05-08T00:20:02.108631525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.108863 containerd[1440]: time="2025-05-08T00:20:02.108415466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.135453 systemd[1]: Started cri-containerd-665408b91c511c08589feccbb4e550f417f52e893ecafc17373e7491c0c73e36.scope - libcontainer container 665408b91c511c08589feccbb4e550f417f52e893ecafc17373e7491c0c73e36. May 8 00:20:02.137712 systemd[1]: Started cri-containerd-01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524.scope - libcontainer container 01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524. May 8 00:20:02.167936 containerd[1440]: time="2025-05-08T00:20:02.167886777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2z4lk,Uid:62799f9d-162f-4b84-8843-4677bf722d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\"" May 8 00:20:02.168921 containerd[1440]: time="2025-05-08T00:20:02.168882021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvnnr,Uid:89e616bb-ebc6-4d0e-9a04-fe851a161c22,Namespace:kube-system,Attempt:0,} returns sandbox id \"665408b91c511c08589feccbb4e550f417f52e893ecafc17373e7491c0c73e36\"" May 8 00:20:02.169276 kubelet[2473]: E0508 00:20:02.169248 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:02.169683 kubelet[2473]: E0508 00:20:02.169656 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:02.172046 containerd[1440]: time="2025-05-08T00:20:02.171664656Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:20:02.172398 containerd[1440]: time="2025-05-08T00:20:02.172316911Z" level=info msg="CreateContainer within sandbox \"665408b91c511c08589feccbb4e550f417f52e893ecafc17373e7491c0c73e36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:20:02.196112 containerd[1440]: time="2025-05-08T00:20:02.196062600Z" level=info msg="CreateContainer within sandbox \"665408b91c511c08589feccbb4e550f417f52e893ecafc17373e7491c0c73e36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d43b782c104214df482c9f0ab461b9641d32ad45eee2dbb74397322186c9ee6\"" May 8 00:20:02.196868 containerd[1440]: time="2025-05-08T00:20:02.196828025Z" level=info msg="StartContainer for \"3d43b782c104214df482c9f0ab461b9641d32ad45eee2dbb74397322186c9ee6\"" May 8 00:20:02.219433 systemd[1]: Started cri-containerd-3d43b782c104214df482c9f0ab461b9641d32ad45eee2dbb74397322186c9ee6.scope - libcontainer container 3d43b782c104214df482c9f0ab461b9641d32ad45eee2dbb74397322186c9ee6. 
May 8 00:20:02.247573 containerd[1440]: time="2025-05-08T00:20:02.247529593Z" level=info msg="StartContainer for \"3d43b782c104214df482c9f0ab461b9641d32ad45eee2dbb74397322186c9ee6\" returns successfully" May 8 00:20:02.279930 kubelet[2473]: E0508 00:20:02.279891 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:02.281511 containerd[1440]: time="2025-05-08T00:20:02.281471264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9fchb,Uid:9b022244-201d-4461-9622-e9cadb32e96f,Namespace:kube-system,Attempt:0,}" May 8 00:20:02.304266 containerd[1440]: time="2025-05-08T00:20:02.304137861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:02.304266 containerd[1440]: time="2025-05-08T00:20:02.304187585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:02.304266 containerd[1440]: time="2025-05-08T00:20:02.304203306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.305018 containerd[1440]: time="2025-05-08T00:20:02.304435726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:02.321445 systemd[1]: Started cri-containerd-d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295.scope - libcontainer container d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295. May 8 00:20:02.355908 containerd[1440]: time="2025-05-08T00:20:02.355870636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9fchb,Uid:9b022244-201d-4461-9622-e9cadb32e96f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\"" May 8 00:20:02.356940 kubelet[2473]: E0508 00:20:02.356872 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:03.184386 kubelet[2473]: E0508 00:20:03.184354 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:03.192494 kubelet[2473]: I0508 00:20:03.192435 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvnnr" podStartSLOduration=2.1924206760000002 podStartE2EDuration="2.192420676s" podCreationTimestamp="2025-05-08 00:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:03.192153535 +0000 UTC m=+7.132628191" watchObservedRunningTime="2025-05-08 00:20:03.192420676 +0000 UTC m=+7.132895292" May 8 00:20:04.229430 kubelet[2473]: E0508 00:20:04.229393 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:07.706257 kubelet[2473]: E0508 00:20:07.706228 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 8 00:20:08.137620 update_engine[1423]: I20250508 00:20:08.137480 1423 update_attempter.cc:509] Updating boot flags... May 8 00:20:08.168383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2848) May 8 00:20:08.207400 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2848) May 8 00:20:10.733351 kubelet[2473]: E0508 00:20:10.732980 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:10.988130 kubelet[2473]: E0508 00:20:10.988021 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:13.365742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677442742.mount: Deactivated successfully. May 8 00:20:14.622025 containerd[1440]: time="2025-05-08T00:20:14.621964982Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:14.622461 containerd[1440]: time="2025-05-08T00:20:14.622385959Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 00:20:14.623149 containerd[1440]: time="2025-05-08T00:20:14.623114387Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:14.626315 containerd[1440]: time="2025-05-08T00:20:14.626275070Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.454106252s" May 8 00:20:14.626369 containerd[1440]: time="2025-05-08T00:20:14.626318312Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 00:20:14.627321 containerd[1440]: time="2025-05-08T00:20:14.627271709Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:20:14.636103 containerd[1440]: time="2025-05-08T00:20:14.636062812Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:20:14.661646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262289491.mount: Deactivated successfully. 
May 8 00:20:14.666161 containerd[1440]: time="2025-05-08T00:20:14.666113664Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\"" May 8 00:20:14.666861 containerd[1440]: time="2025-05-08T00:20:14.666834732Z" level=info msg="StartContainer for \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\"" May 8 00:20:14.695507 systemd[1]: Started cri-containerd-20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e.scope - libcontainer container 20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e. May 8 00:20:14.721223 containerd[1440]: time="2025-05-08T00:20:14.721179571Z" level=info msg="StartContainer for \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\" returns successfully" May 8 00:20:14.766932 systemd[1]: cri-containerd-20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e.scope: Deactivated successfully. May 8 00:20:14.929408 containerd[1440]: time="2025-05-08T00:20:14.929261044Z" level=info msg="shim disconnected" id=20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e namespace=k8s.io May 8 00:20:14.929408 containerd[1440]: time="2025-05-08T00:20:14.929327966Z" level=warning msg="cleaning up after shim disconnected" id=20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e namespace=k8s.io May 8 00:20:14.929408 containerd[1440]: time="2025-05-08T00:20:14.929339007Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:20:15.254845 kubelet[2473]: E0508 00:20:15.254565 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:15.257540 containerd[1440]: time="2025-05-08T00:20:15.257419252Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:20:15.296443 containerd[1440]: time="2025-05-08T00:20:15.296384516Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\"" May 8 00:20:15.296930 containerd[1440]: time="2025-05-08T00:20:15.296904055Z" level=info msg="StartContainer for \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\"" May 8 00:20:15.327428 systemd[1]: Started cri-containerd-37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28.scope - libcontainer container 37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28. May 8 00:20:15.347559 containerd[1440]: time="2025-05-08T00:20:15.347502904Z" level=info msg="StartContainer for \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\" returns successfully" May 8 00:20:15.370458 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:20:15.370789 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:20:15.370857 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:20:15.379575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 8 00:20:15.379802 systemd[1]: cri-containerd-37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28.scope: Deactivated successfully. May 8 00:20:15.396554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:20:15.397686 containerd[1440]: time="2025-05-08T00:20:15.397625297Z" level=info msg="shim disconnected" id=37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28 namespace=k8s.io May 8 00:20:15.397770 containerd[1440]: time="2025-05-08T00:20:15.397685819Z" level=warning msg="cleaning up after shim disconnected" id=37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28 namespace=k8s.io May 8 00:20:15.397770 containerd[1440]: time="2025-05-08T00:20:15.397695299Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:20:15.655232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e-rootfs.mount: Deactivated successfully. May 8 00:20:15.768887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115767190.mount: Deactivated successfully. May 8 00:20:16.253226 kubelet[2473]: E0508 00:20:16.253191 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:16.255393 containerd[1440]: time="2025-05-08T00:20:16.255310705Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:20:16.259498 containerd[1440]: time="2025-05-08T00:20:16.259451327Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:16.260927 containerd[1440]: time="2025-05-08T00:20:16.260881856Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 00:20:16.263972 containerd[1440]: time="2025-05-08T00:20:16.263905879Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:16.266655 containerd[1440]: time="2025-05-08T00:20:16.266599732Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.63927586s" May 8 00:20:16.266655 containerd[1440]: time="2025-05-08T00:20:16.266647813Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 00:20:16.269219 containerd[1440]: time="2025-05-08T00:20:16.269185780Z" level=info msg="CreateContainer within sandbox \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:20:16.271009 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2779721910.mount: Deactivated successfully. May 8 00:20:16.276385 containerd[1440]: time="2025-05-08T00:20:16.276340585Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\"" May 8 00:20:16.278690 containerd[1440]: time="2025-05-08T00:20:16.278656185Z" level=info msg="StartContainer for \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\"" May 8 00:20:16.287339 containerd[1440]: time="2025-05-08T00:20:16.287216398Z" level=info msg="CreateContainer within sandbox \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\"" May 8 00:20:16.288470 containerd[1440]: time="2025-05-08T00:20:16.288436440Z" level=info msg="StartContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\"" May 8 00:20:16.310468 systemd[1]: Started cri-containerd-375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045.scope - libcontainer container 375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045. May 8 00:20:16.312728 systemd[1]: Started cri-containerd-d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252.scope - libcontainer container d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252. May 8 00:20:16.348137 containerd[1440]: time="2025-05-08T00:20:16.348084684Z" level=info msg="StartContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" returns successfully" May 8 00:20:16.349143 containerd[1440]: time="2025-05-08T00:20:16.348109405Z" level=info msg="StartContainer for \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\" returns successfully" May 8 00:20:16.362472 systemd[1]: cri-containerd-375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045.scope: Deactivated successfully. 
May 8 00:20:16.418718 containerd[1440]: time="2025-05-08T00:20:16.418637582Z" level=info msg="shim disconnected" id=375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045 namespace=k8s.io May 8 00:20:16.418718 containerd[1440]: time="2025-05-08T00:20:16.418711904Z" level=warning msg="cleaning up after shim disconnected" id=375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045 namespace=k8s.io May 8 00:20:16.418718 containerd[1440]: time="2025-05-08T00:20:16.418722224Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:20:17.256989 kubelet[2473]: E0508 00:20:17.256954 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:17.260630 kubelet[2473]: E0508 00:20:17.260604 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:17.270640 containerd[1440]: time="2025-05-08T00:20:17.270570637Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:20:17.288273 containerd[1440]: time="2025-05-08T00:20:17.288214844Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\"" May 8 00:20:17.289847 containerd[1440]: time="2025-05-08T00:20:17.288820623Z" level=info msg="StartContainer for \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\"" May 8 00:20:17.320458 systemd[1]: Started cri-containerd-63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785.scope - libcontainer container 63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785. May 8 00:20:17.339421 systemd[1]: cri-containerd-63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785.scope: Deactivated successfully. May 8 00:20:17.343431 containerd[1440]: time="2025-05-08T00:20:17.343359215Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice/cri-containerd-63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785.scope/memory.events\": no such file or directory" May 8 00:20:17.344083 containerd[1440]: time="2025-05-08T00:20:17.343890272Z" level=info msg="StartContainer for \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\" returns successfully" May 8 00:20:17.361513 containerd[1440]: time="2025-05-08T00:20:17.361443476Z" level=info msg="shim disconnected" id=63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785 namespace=k8s.io May 8 00:20:17.361513 containerd[1440]: time="2025-05-08T00:20:17.361497358Z" level=warning msg="cleaning up after shim disconnected" id=63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785 namespace=k8s.io May 8 00:20:17.361513 containerd[1440]: time="2025-05-08T00:20:17.361505158Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:20:17.654923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785-rootfs.mount: Deactivated successfully. 
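Between apply-sysctl-overwrites and clean-cilium-state, the mount-bpf-fs init container (375dc4b2...) seen above makes sure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. The equivalent mount(2) call, sketched with golang.org/x/sys/unix purely as an illustration of what that init step does, not Cilium's actual code:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Needs root (the init container runs privileged). EBUSY means a
	// bpffs is already mounted at the target, which is fine.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		if err == unix.EBUSY {
			log.Println("/sys/fs/bpf already mounted")
			return
		}
		log.Fatalf("mount bpffs: %v", err)
	}
	log.Println("mounted bpffs on /sys/fs/bpf")
}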
May 8 00:20:18.264350 kubelet[2473]: E0508 00:20:18.264312 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:18.265752 kubelet[2473]: E0508 00:20:18.264904 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:18.267327 containerd[1440]: time="2025-05-08T00:20:18.267243163Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:20:18.281164 containerd[1440]: time="2025-05-08T00:20:18.280676088Z" level=info msg="CreateContainer within sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\"" May 8 00:20:18.281336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417679036.mount: Deactivated successfully. May 8 00:20:18.282231 containerd[1440]: time="2025-05-08T00:20:18.282190213Z" level=info msg="StartContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\"" May 8 00:20:18.298530 kubelet[2473]: I0508 00:20:18.298471 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9fchb" podStartSLOduration=3.389612698 podStartE2EDuration="17.298455503s" podCreationTimestamp="2025-05-08 00:20:01 +0000 UTC" firstStartedPulling="2025-05-08 00:20:02.358764001 +0000 UTC m=+6.299238617" lastFinishedPulling="2025-05-08 00:20:16.267606806 +0000 UTC m=+20.208081422" observedRunningTime="2025-05-08 00:20:17.2849813 +0000 UTC m=+21.225455916" watchObservedRunningTime="2025-05-08 00:20:18.298455503 +0000 UTC m=+22.238930079" May 8 00:20:18.313533 systemd[1]: Started cri-containerd-99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef.scope - libcontainer container 99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef. May 8 00:20:18.346417 containerd[1440]: time="2025-05-08T00:20:18.346363426Z" level=info msg="StartContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" returns successfully" May 8 00:20:18.547073 kubelet[2473]: I0508 00:20:18.546964 2473 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:20:18.607615 systemd[1]: Created slice kubepods-burstable-pod461b72e7_1810_4c5c_8bb8_3efe92158dd5.slice - libcontainer container kubepods-burstable-pod461b72e7_1810_4c5c_8bb8_3efe92158dd5.slice. May 8 00:20:18.615819 systemd[1]: Created slice kubepods-burstable-pod658ac8f9_c408_44c6_9a5e_05d7336a2fea.slice - libcontainer container kubepods-burstable-pod658ac8f9_c408_44c6_9a5e_05d7336a2fea.slice. May 8 00:20:18.655186 systemd[1]: run-containerd-runc-k8s.io-99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef-runc.9VBAwh.mount: Deactivated successfully. 
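The pod_startup_latency_tracker entry for cilium-operator-6c4d7847fc-9fchb above also shows how its two durations relate: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A quick Go check using the timestamps copied verbatim from that entry; a back-of-the-envelope verification, not kubelet's code:

package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 17298455503 * time.Nanosecond // podStartE2EDuration="17.298455503s"
	firstPull := mustParse("2025-05-08 00:20:02.358764001 +0000 UTC")
	lastPull := mustParse("2025-05-08 00:20:16.267606806 +0000 UTC")

	// SLO latency excludes time spent pulling images.
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo.Seconds()) // prints 3.389612698, matching podStartSLOduration
}

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}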
May 8 00:20:18.695346 kubelet[2473]: I0508 00:20:18.695231 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461b72e7-1810-4c5c-8bb8-3efe92158dd5-config-volume\") pod \"coredns-668d6bf9bc-n8f7s\" (UID: \"461b72e7-1810-4c5c-8bb8-3efe92158dd5\") " pod="kube-system/coredns-668d6bf9bc-n8f7s" May 8 00:20:18.695346 kubelet[2473]: I0508 00:20:18.695302 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/658ac8f9-c408-44c6-9a5e-05d7336a2fea-config-volume\") pod \"coredns-668d6bf9bc-gdfql\" (UID: \"658ac8f9-c408-44c6-9a5e-05d7336a2fea\") " pod="kube-system/coredns-668d6bf9bc-gdfql" May 8 00:20:18.695346 kubelet[2473]: I0508 00:20:18.695323 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gqt\" (UniqueName: \"kubernetes.io/projected/461b72e7-1810-4c5c-8bb8-3efe92158dd5-kube-api-access-x4gqt\") pod \"coredns-668d6bf9bc-n8f7s\" (UID: \"461b72e7-1810-4c5c-8bb8-3efe92158dd5\") " pod="kube-system/coredns-668d6bf9bc-n8f7s" May 8 00:20:18.695663 kubelet[2473]: I0508 00:20:18.695622 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7hr\" (UniqueName: \"kubernetes.io/projected/658ac8f9-c408-44c6-9a5e-05d7336a2fea-kube-api-access-wm7hr\") pod \"coredns-668d6bf9bc-gdfql\" (UID: \"658ac8f9-c408-44c6-9a5e-05d7336a2fea\") " pod="kube-system/coredns-668d6bf9bc-gdfql" May 8 00:20:18.912788 kubelet[2473]: E0508 00:20:18.912672 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:18.913778 containerd[1440]: time="2025-05-08T00:20:18.913737995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8f7s,Uid:461b72e7-1810-4c5c-8bb8-3efe92158dd5,Namespace:kube-system,Attempt:0,}" May 8 00:20:18.919513 kubelet[2473]: E0508 00:20:18.919487 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:18.920008 containerd[1440]: time="2025-05-08T00:20:18.919942662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdfql,Uid:658ac8f9-c408-44c6-9a5e-05d7336a2fea,Namespace:kube-system,Attempt:0,}" May 8 00:20:19.273335 kubelet[2473]: E0508 00:20:19.273224 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:19.289207 kubelet[2473]: I0508 00:20:19.289133 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2z4lk" podStartSLOduration=5.832762638 podStartE2EDuration="18.28911252s" podCreationTimestamp="2025-05-08 00:20:01 +0000 UTC" firstStartedPulling="2025-05-08 00:20:02.170775461 +0000 UTC m=+6.111250077" lastFinishedPulling="2025-05-08 00:20:14.627125343 +0000 UTC m=+18.567599959" observedRunningTime="2025-05-08 00:20:19.287736641 +0000 UTC m=+23.228211257" watchObservedRunningTime="2025-05-08 00:20:19.28911252 +0000 UTC m=+23.229587136" May 8 00:20:20.274564 kubelet[2473]: E0508 00:20:20.274518 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:20.530340 systemd-networkd[1370]: cilium_host: Link UP May 8 00:20:20.530490 systemd-networkd[1370]: cilium_net: Link UP May 8 00:20:20.530624 systemd-networkd[1370]: cilium_net: Gained carrier May 8 00:20:20.530760 systemd-networkd[1370]: cilium_host: Gained carrier May 8 00:20:20.614308 systemd-networkd[1370]: cilium_vxlan: Link UP May 8 00:20:20.614315 systemd-networkd[1370]: cilium_vxlan: Gained carrier May 8 00:20:20.824520 systemd-networkd[1370]: cilium_net: Gained IPv6LL May 8 00:20:20.912637 systemd-networkd[1370]: cilium_host: Gained IPv6LL May 8 00:20:20.913313 kernel: NET: Registered PF_ALG protocol family May 8 00:20:21.278763 kubelet[2473]: E0508 00:20:21.278660 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:21.492917 systemd-networkd[1370]: lxc_health: Link UP May 8 00:20:21.500756 systemd-networkd[1370]: lxc_health: Gained carrier May 8 00:20:22.056912 systemd-networkd[1370]: lxc44e53240bb19: Link UP May 8 00:20:22.066302 kernel: eth0: renamed from tmp7d066 May 8 00:20:22.074628 systemd-networkd[1370]: lxc26d605c8eb4f: Link UP May 8 00:20:22.085363 kernel: eth0: renamed from tmp37031 May 8 00:20:22.098687 systemd-networkd[1370]: lxc44e53240bb19: Gained carrier May 8 00:20:22.099697 systemd-networkd[1370]: lxc26d605c8eb4f: Gained carrier May 8 00:20:22.283428 kubelet[2473]: E0508 00:20:22.283402 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:22.432724 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL May 8 00:20:22.624683 systemd-networkd[1370]: lxc_health: Gained IPv6LL May 8 00:20:23.284004 kubelet[2473]: E0508 00:20:23.283912 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:23.584694 systemd-networkd[1370]: lxc26d605c8eb4f: Gained IPv6LL May 8 00:20:24.032735 systemd-networkd[1370]: lxc44e53240bb19: Gained IPv6LL May 8 00:20:24.139644 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:55368.service - OpenSSH per-connection server daemon (10.0.0.1:55368). May 8 00:20:24.172886 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 55368 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:24.174182 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:24.178019 systemd-logind[1419]: New session 8 of user core. May 8 00:20:24.184411 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:20:24.289833 kubelet[2473]: E0508 00:20:24.289725 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:24.321348 sshd[3714]: pam_unix(sshd:session): session closed for user core May 8 00:20:24.325153 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. May 8 00:20:24.325485 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:55368.service: Deactivated successfully. May 8 00:20:24.328895 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:20:24.331373 systemd-logind[1419]: Removed session 8. 
May 8 00:20:25.710197 containerd[1440]: time="2025-05-08T00:20:25.710059392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:25.710782 containerd[1440]: time="2025-05-08T00:20:25.710580162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:25.710782 containerd[1440]: time="2025-05-08T00:20:25.710603603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:25.710782 containerd[1440]: time="2025-05-08T00:20:25.710755846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:25.719917 containerd[1440]: time="2025-05-08T00:20:25.719691297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:25.719917 containerd[1440]: time="2025-05-08T00:20:25.719754578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:25.720551 containerd[1440]: time="2025-05-08T00:20:25.720447312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:25.720653 containerd[1440]: time="2025-05-08T00:20:25.720577074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:25.732044 systemd[1]: run-containerd-runc-k8s.io-7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e-runc.yQV8yW.mount: Deactivated successfully. May 8 00:20:25.737479 systemd[1]: Started cri-containerd-7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e.scope - libcontainer container 7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e. May 8 00:20:25.741691 systemd[1]: Started cri-containerd-370315ca4e3b7d240412095fb622e645102d422d30efa8b7f3cc63eec2e8db2b.scope - libcontainer container 370315ca4e3b7d240412095fb622e645102d422d30efa8b7f3cc63eec2e8db2b. 
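Each "Started cri-containerd-<id>.scope - libcontainer container <id>." entry above is systemd creating a transient scope unit to hold the container's processes, which runc's systemd cgroup driver requests over D-Bus. A stripped-down illustration using github.com/coreos/go-systemd/v22/dbus; the unit name "demo.scope" is made up here, and this is not containerd's actual code path:

package main

import (
	"context"
	"log"
	"os"

	systemd "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	// Talks to the system bus; needs a systemd host and enough privileges.
	conn, err := systemd.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	props := []systemd.Property{
		systemd.PropDescription("libcontainer container demo"),
		// Adopt the current process into the scope, as runc does with the
		// container's init PID.
		systemd.PropPids(uint32(os.Getpid())),
	}
	done := make(chan string, 1)
	// runc derives the real name from the container ID, producing the
	// cri-containerd-<id>.scope units seen in this log.
	if _, err := conn.StartTransientUnitContext(ctx, "demo.scope", "replace", props, done); err != nil {
		log.Fatal(err)
	}
	log.Println("scope start job:", <-done)
}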
May 8 00:20:25.748954 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:20:25.755331 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:20:25.768811 containerd[1440]: time="2025-05-08T00:20:25.768765358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdfql,Uid:658ac8f9-c408-44c6-9a5e-05d7336a2fea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e\"" May 8 00:20:25.769670 kubelet[2473]: E0508 00:20:25.769645 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:25.774046 containerd[1440]: time="2025-05-08T00:20:25.773904376Z" level=info msg="CreateContainer within sandbox \"7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:20:25.777139 containerd[1440]: time="2025-05-08T00:20:25.776554267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8f7s,Uid:461b72e7-1810-4c5c-8bb8-3efe92158dd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"370315ca4e3b7d240412095fb622e645102d422d30efa8b7f3cc63eec2e8db2b\"" May 8 00:20:25.778586 kubelet[2473]: E0508 00:20:25.778477 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:25.783196 containerd[1440]: time="2025-05-08T00:20:25.782826667Z" level=info msg="CreateContainer within sandbox \"370315ca4e3b7d240412095fb622e645102d422d30efa8b7f3cc63eec2e8db2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:20:25.789957 containerd[1440]: time="2025-05-08T00:20:25.789911803Z" level=info msg="CreateContainer within sandbox \"7d066067cb41d4f88028a43f525712834f1e5767368169367a1929530fe4f18e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ca0ecdc5a2d27822999674d82594e61e8d98e4fab7ea364fd80637ffa25cc38\"" May 8 00:20:25.790537 containerd[1440]: time="2025-05-08T00:20:25.790483334Z" level=info msg="StartContainer for \"9ca0ecdc5a2d27822999674d82594e61e8d98e4fab7ea364fd80637ffa25cc38\"" May 8 00:20:25.797391 containerd[1440]: time="2025-05-08T00:20:25.797349586Z" level=info msg="CreateContainer within sandbox \"370315ca4e3b7d240412095fb622e645102d422d30efa8b7f3cc63eec2e8db2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f8ab16a8cfa9319f9f1976ef453674e9eac541114b0df1c28afb567d51db504\"" May 8 00:20:25.797994 containerd[1440]: time="2025-05-08T00:20:25.797959878Z" level=info msg="StartContainer for \"9f8ab16a8cfa9319f9f1976ef453674e9eac541114b0df1c28afb567d51db504\"" May 8 00:20:25.824469 systemd[1]: Started cri-containerd-9ca0ecdc5a2d27822999674d82594e61e8d98e4fab7ea364fd80637ffa25cc38.scope - libcontainer container 9ca0ecdc5a2d27822999674d82594e61e8d98e4fab7ea364fd80637ffa25cc38. May 8 00:20:25.828098 systemd[1]: Started cri-containerd-9f8ab16a8cfa9319f9f1976ef453674e9eac541114b0df1c28afb567d51db504.scope - libcontainer container 9f8ab16a8cfa9319f9f1976ef453674e9eac541114b0df1c28afb567d51db504. 
May 8 00:20:25.852310 containerd[1440]: time="2025-05-08T00:20:25.852251598Z" level=info msg="StartContainer for \"9ca0ecdc5a2d27822999674d82594e61e8d98e4fab7ea364fd80637ffa25cc38\" returns successfully" May 8 00:20:25.875408 containerd[1440]: time="2025-05-08T00:20:25.875330401Z" level=info msg="StartContainer for \"9f8ab16a8cfa9319f9f1976ef453674e9eac541114b0df1c28afb567d51db504\" returns successfully" May 8 00:20:26.293744 kubelet[2473]: E0508 00:20:26.293703 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:26.296296 kubelet[2473]: E0508 00:20:26.296254 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:26.308992 kubelet[2473]: I0508 00:20:26.308628 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdfql" podStartSLOduration=25.30861198 podStartE2EDuration="25.30861198s" podCreationTimestamp="2025-05-08 00:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:26.30696703 +0000 UTC m=+30.247441646" watchObservedRunningTime="2025-05-08 00:20:26.30861198 +0000 UTC m=+30.249086596" May 8 00:20:26.318249 kubelet[2473]: I0508 00:20:26.317319 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n8f7s" podStartSLOduration=25.317300816 podStartE2EDuration="25.317300816s" podCreationTimestamp="2025-05-08 00:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:26.316604603 +0000 UTC m=+30.257079259" watchObservedRunningTime="2025-05-08 00:20:26.317300816 +0000 UTC m=+30.257775472" May 8 00:20:27.297406 kubelet[2473]: E0508 00:20:27.297380 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:27.297406 kubelet[2473]: E0508 00:20:27.297407 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:28.298773 kubelet[2473]: E0508 00:20:28.298593 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:28.298773 kubelet[2473]: E0508 00:20:28.298700 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:20:29.334877 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:55378.service - OpenSSH per-connection server daemon (10.0.0.1:55378). May 8 00:20:29.372319 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 55378 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:29.373626 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:29.377379 systemd-logind[1419]: New session 9 of user core. May 8 00:20:29.384449 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 8 00:20:29.499024 sshd[3911]: pam_unix(sshd:session): session closed for user core May 8 00:20:29.502230 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:55378.service: Deactivated successfully. May 8 00:20:29.503957 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:20:29.504613 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. May 8 00:20:29.505399 systemd-logind[1419]: Removed session 9. May 8 00:20:34.509905 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294). May 8 00:20:34.546682 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:34.547971 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:34.551523 systemd-logind[1419]: New session 10 of user core. May 8 00:20:34.562471 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:20:34.675816 sshd[3928]: pam_unix(sshd:session): session closed for user core May 8 00:20:34.678998 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:45294.service: Deactivated successfully. May 8 00:20:34.680844 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:20:34.682187 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. May 8 00:20:34.683134 systemd-logind[1419]: Removed session 10. May 8 00:20:39.691821 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:45306.service - OpenSSH per-connection server daemon (10.0.0.1:45306). May 8 00:20:39.739336 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 45306 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:39.740740 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:39.749039 systemd-logind[1419]: New session 11 of user core. May 8 00:20:39.758967 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:20:39.889445 sshd[3943]: pam_unix(sshd:session): session closed for user core May 8 00:20:39.902846 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:45306.service: Deactivated successfully. May 8 00:20:39.904348 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:20:39.905904 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. May 8 00:20:39.907151 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:45310.service - OpenSSH per-connection server daemon (10.0.0.1:45310). May 8 00:20:39.909013 systemd-logind[1419]: Removed session 11. May 8 00:20:39.959943 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 45310 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:39.961330 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:39.965687 systemd-logind[1419]: New session 12 of user core. May 8 00:20:39.975444 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:20:40.120541 sshd[3960]: pam_unix(sshd:session): session closed for user core May 8 00:20:40.135104 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:45310.service: Deactivated successfully. May 8 00:20:40.140026 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:20:40.142587 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. May 8 00:20:40.151609 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:45324.service - OpenSSH per-connection server daemon (10.0.0.1:45324). May 8 00:20:40.153577 systemd-logind[1419]: Removed session 12. 
May 8 00:20:40.192206 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:40.193875 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:40.199987 systemd-logind[1419]: New session 13 of user core. May 8 00:20:40.206477 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:20:40.332478 sshd[3972]: pam_unix(sshd:session): session closed for user core May 8 00:20:40.336577 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:45324.service: Deactivated successfully. May 8 00:20:40.338787 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:20:40.340296 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. May 8 00:20:40.341740 systemd-logind[1419]: Removed session 13. May 8 00:20:45.344191 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). May 8 00:20:45.385502 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:45.386670 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:45.390559 systemd-logind[1419]: New session 14 of user core. May 8 00:20:45.398442 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:20:45.518492 sshd[3987]: pam_unix(sshd:session): session closed for user core May 8 00:20:45.524028 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:58526.service: Deactivated successfully. May 8 00:20:45.525710 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:20:45.526475 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. May 8 00:20:45.527591 systemd-logind[1419]: Removed session 14. May 8 00:20:50.534596 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:58534.service - OpenSSH per-connection server daemon (10.0.0.1:58534). May 8 00:20:50.586172 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 58534 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:50.587668 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:50.591695 systemd-logind[1419]: New session 15 of user core. May 8 00:20:50.604459 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:20:50.733389 sshd[4002]: pam_unix(sshd:session): session closed for user core May 8 00:20:50.745016 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:58534.service: Deactivated successfully. May 8 00:20:50.747011 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:20:50.748749 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. May 8 00:20:50.750601 systemd-logind[1419]: Removed session 15. May 8 00:20:50.752626 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:58544.service - OpenSSH per-connection server daemon (10.0.0.1:58544). May 8 00:20:50.796687 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 58544 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:50.797938 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:50.801629 systemd-logind[1419]: New session 16 of user core. May 8 00:20:50.818225 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 8 00:20:51.031726 sshd[4018]: pam_unix(sshd:session): session closed for user core May 8 00:20:51.041728 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:58544.service: Deactivated successfully. May 8 00:20:51.043370 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:20:51.044751 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. May 8 00:20:51.049558 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:58556.service - OpenSSH per-connection server daemon (10.0.0.1:58556). May 8 00:20:51.050528 systemd-logind[1419]: Removed session 16. May 8 00:20:51.092646 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 58556 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:51.094270 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:51.099346 systemd-logind[1419]: New session 17 of user core. May 8 00:20:51.111517 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:20:51.811219 sshd[4031]: pam_unix(sshd:session): session closed for user core May 8 00:20:51.821173 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:58556.service: Deactivated successfully. May 8 00:20:51.823346 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:20:51.824997 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. May 8 00:20:51.833020 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:58570.service - OpenSSH per-connection server daemon (10.0.0.1:58570). May 8 00:20:51.834296 systemd-logind[1419]: Removed session 17. May 8 00:20:51.866747 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 58570 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:51.869575 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:51.875555 systemd-logind[1419]: New session 18 of user core. May 8 00:20:51.881023 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:20:52.102946 sshd[4052]: pam_unix(sshd:session): session closed for user core May 8 00:20:52.111835 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:58570.service: Deactivated successfully. May 8 00:20:52.113563 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:20:52.115685 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. May 8 00:20:52.122680 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:58578.service - OpenSSH per-connection server daemon (10.0.0.1:58578). May 8 00:20:52.123898 systemd-logind[1419]: Removed session 18. May 8 00:20:52.154234 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 58578 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:52.155533 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:52.159520 systemd-logind[1419]: New session 19 of user core. May 8 00:20:52.171431 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:20:52.283217 sshd[4064]: pam_unix(sshd:session): session closed for user core May 8 00:20:52.286974 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:58578.service: Deactivated successfully. May 8 00:20:52.289072 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:20:52.289785 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. May 8 00:20:52.290872 systemd-logind[1419]: Removed session 19. May 8 00:20:57.294299 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:34694.service - OpenSSH per-connection server daemon (10.0.0.1:34694). 
May 8 00:20:57.330080 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 34694 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:20:57.331575 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:57.336914 systemd-logind[1419]: New session 20 of user core. May 8 00:20:57.347483 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:20:57.455450 sshd[4083]: pam_unix(sshd:session): session closed for user core May 8 00:20:57.458344 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:34694.service: Deactivated successfully. May 8 00:20:57.460176 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:20:57.463637 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. May 8 00:20:57.464459 systemd-logind[1419]: Removed session 20. May 8 00:21:02.466803 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:60190.service - OpenSSH per-connection server daemon (10.0.0.1:60190). May 8 00:21:02.501111 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 60190 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:02.502273 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:02.506356 systemd-logind[1419]: New session 21 of user core. May 8 00:21:02.520454 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:21:02.625519 sshd[4099]: pam_unix(sshd:session): session closed for user core May 8 00:21:02.629113 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:60190.service: Deactivated successfully. May 8 00:21:02.630895 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:21:02.631612 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit. May 8 00:21:02.632415 systemd-logind[1419]: Removed session 21. May 8 00:21:07.636863 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:60202.service - OpenSSH per-connection server daemon (10.0.0.1:60202). May 8 00:21:07.671484 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 60202 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:07.672709 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:07.675849 systemd-logind[1419]: New session 22 of user core. May 8 00:21:07.685408 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:21:07.789138 sshd[4113]: pam_unix(sshd:session): session closed for user core May 8 00:21:07.801768 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:60202.service: Deactivated successfully. May 8 00:21:07.804673 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:21:07.806316 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit. May 8 00:21:07.818026 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:60204.service - OpenSSH per-connection server daemon (10.0.0.1:60204). May 8 00:21:07.818847 systemd-logind[1419]: Removed session 22. May 8 00:21:07.848079 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 60204 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:07.849159 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:07.852465 systemd-logind[1419]: New session 23 of user core. May 8 00:21:07.867407 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 8 00:21:09.704742 containerd[1440]: time="2025-05-08T00:21:09.704627590Z" level=info msg="StopContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" with timeout 30 (s)" May 8 00:21:09.706539 containerd[1440]: time="2025-05-08T00:21:09.706438165Z" level=info msg="Stop container \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" with signal terminated" May 8 00:21:09.718397 systemd[1]: cri-containerd-d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252.scope: Deactivated successfully. May 8 00:21:09.742033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252-rootfs.mount: Deactivated successfully. May 8 00:21:09.744580 containerd[1440]: time="2025-05-08T00:21:09.744545438Z" level=info msg="StopContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" with timeout 2 (s)" May 8 00:21:09.745119 containerd[1440]: time="2025-05-08T00:21:09.745096762Z" level=info msg="Stop container \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" with signal terminated" May 8 00:21:09.747422 containerd[1440]: time="2025-05-08T00:21:09.747373741Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:21:09.748605 containerd[1440]: time="2025-05-08T00:21:09.748561391Z" level=info msg="shim disconnected" id=d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252 namespace=k8s.io May 8 00:21:09.748605 containerd[1440]: time="2025-05-08T00:21:09.748603111Z" level=warning msg="cleaning up after shim disconnected" id=d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252 namespace=k8s.io May 8 00:21:09.748694 containerd[1440]: time="2025-05-08T00:21:09.748611231Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:09.750727 systemd-networkd[1370]: lxc_health: Link DOWN May 8 00:21:09.750733 systemd-networkd[1370]: lxc_health: Lost carrier May 8 00:21:09.772098 systemd[1]: cri-containerd-99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef.scope: Deactivated successfully. May 8 00:21:09.772529 systemd[1]: cri-containerd-99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef.scope: Consumed 6.557s CPU time. May 8 00:21:09.790112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef-rootfs.mount: Deactivated successfully. May 8 00:21:09.792968 containerd[1440]: time="2025-05-08T00:21:09.792868834Z" level=info msg="StopContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" returns successfully" May 8 00:21:09.793726 containerd[1440]: time="2025-05-08T00:21:09.793677721Z" level=info msg="StopPodSandbox for \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\"" May 8 00:21:09.793835 containerd[1440]: time="2025-05-08T00:21:09.793738201Z" level=info msg="Container to stop \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.795247 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295-shm.mount: Deactivated successfully. 
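The "failed to reload cni configuration after receiving fs change event(REMOVE ...)" error above is containerd's watcher on /etc/cni/net.d reacting to Cilium deleting 05-cilium.conf during teardown and then finding no network config left. A bare-bones version of such a directory watcher, sketched with github.com/fsnotify/fsnotify for illustration rather than containerd's actual CNI loader:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory for create/remove/write events.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("fs change event(REMOVE %q): reload CNI config", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}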
May 8 00:21:09.796654 containerd[1440]: time="2025-05-08T00:21:09.796615585Z" level=info msg="shim disconnected" id=99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef namespace=k8s.io May 8 00:21:09.797116 containerd[1440]: time="2025-05-08T00:21:09.797093789Z" level=warning msg="cleaning up after shim disconnected" id=99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef namespace=k8s.io May 8 00:21:09.797213 containerd[1440]: time="2025-05-08T00:21:09.797198710Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:09.803731 systemd[1]: cri-containerd-d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295.scope: Deactivated successfully. May 8 00:21:09.812711 containerd[1440]: time="2025-05-08T00:21:09.812621836Z" level=info msg="StopContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" returns successfully" May 8 00:21:09.814220 containerd[1440]: time="2025-05-08T00:21:09.814132168Z" level=info msg="StopPodSandbox for \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\"" May 8 00:21:09.814220 containerd[1440]: time="2025-05-08T00:21:09.814210409Z" level=info msg="Container to stop \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.814220 containerd[1440]: time="2025-05-08T00:21:09.814223169Z" level=info msg="Container to stop \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.814364 containerd[1440]: time="2025-05-08T00:21:09.814232769Z" level=info msg="Container to stop \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.814364 containerd[1440]: time="2025-05-08T00:21:09.814242489Z" level=info msg="Container to stop \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.814364 containerd[1440]: time="2025-05-08T00:21:09.814251089Z" level=info msg="Container to stop \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:21:09.816034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524-shm.mount: Deactivated successfully. May 8 00:21:09.819837 systemd[1]: cri-containerd-01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524.scope: Deactivated successfully. 
May 8 00:21:09.827598 containerd[1440]: time="2025-05-08T00:21:09.827546958Z" level=info msg="shim disconnected" id=d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295 namespace=k8s.io May 8 00:21:09.827792 containerd[1440]: time="2025-05-08T00:21:09.827595999Z" level=warning msg="cleaning up after shim disconnected" id=d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295 namespace=k8s.io May 8 00:21:09.827792 containerd[1440]: time="2025-05-08T00:21:09.827613239Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:09.838534 containerd[1440]: time="2025-05-08T00:21:09.838491488Z" level=info msg="TearDown network for sandbox \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\" successfully" May 8 00:21:09.838534 containerd[1440]: time="2025-05-08T00:21:09.838528529Z" level=info msg="StopPodSandbox for \"d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295\" returns successfully" May 8 00:21:09.839109 containerd[1440]: time="2025-05-08T00:21:09.839066933Z" level=info msg="shim disconnected" id=01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524 namespace=k8s.io May 8 00:21:09.839167 containerd[1440]: time="2025-05-08T00:21:09.839121973Z" level=warning msg="cleaning up after shim disconnected" id=01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524 namespace=k8s.io May 8 00:21:09.839167 containerd[1440]: time="2025-05-08T00:21:09.839131733Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:09.858092 containerd[1440]: time="2025-05-08T00:21:09.858044569Z" level=info msg="TearDown network for sandbox \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" successfully" May 8 00:21:09.858092 containerd[1440]: time="2025-05-08T00:21:09.858082249Z" level=info msg="StopPodSandbox for \"01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524\" returns successfully" May 8 00:21:09.860899 kubelet[2473]: I0508 00:21:09.860864 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfsq\" (UniqueName: \"kubernetes.io/projected/9b022244-201d-4461-9622-e9cadb32e96f-kube-api-access-2kfsq\") pod \"9b022244-201d-4461-9622-e9cadb32e96f\" (UID: \"9b022244-201d-4461-9622-e9cadb32e96f\") " May 8 00:21:09.861177 kubelet[2473]: I0508 00:21:09.860906 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b022244-201d-4461-9622-e9cadb32e96f-cilium-config-path\") pod \"9b022244-201d-4461-9622-e9cadb32e96f\" (UID: \"9b022244-201d-4461-9622-e9cadb32e96f\") " May 8 00:21:09.862882 kubelet[2473]: I0508 00:21:09.862850 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b022244-201d-4461-9622-e9cadb32e96f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b022244-201d-4461-9622-e9cadb32e96f" (UID: "9b022244-201d-4461-9622-e9cadb32e96f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:21:09.864021 kubelet[2473]: I0508 00:21:09.863488 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b022244-201d-4461-9622-e9cadb32e96f-kube-api-access-2kfsq" (OuterVolumeSpecName: "kube-api-access-2kfsq") pod "9b022244-201d-4461-9622-e9cadb32e96f" (UID: "9b022244-201d-4461-9622-e9cadb32e96f"). InnerVolumeSpecName "kube-api-access-2kfsq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:21:09.961594 kubelet[2473]: I0508 00:21:09.961469 2473 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2kfsq\" (UniqueName: \"kubernetes.io/projected/9b022244-201d-4461-9622-e9cadb32e96f-kube-api-access-2kfsq\") on node \"localhost\" DevicePath \"\"" May 8 00:21:09.961594 kubelet[2473]: I0508 00:21:09.961525 2473 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b022244-201d-4461-9622-e9cadb32e96f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.061895 kubelet[2473]: I0508 00:21:10.061844 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-run\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.061895 kubelet[2473]: I0508 00:21:10.061901 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rlp\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-kube-api-access-79rlp\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061925 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-lib-modules\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061943 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-net\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061958 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-cgroup\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061973 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-hostproc\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061969 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062050 kubelet[2473]: I0508 00:21:10.061988 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-etc-cni-netd\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062198 kubelet[2473]: I0508 00:21:10.062012 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062198 kubelet[2473]: I0508 00:21:10.062036 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062198 kubelet[2473]: I0508 00:21:10.062039 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-kernel\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062198 kubelet[2473]: I0508 00:21:10.062051 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062198 kubelet[2473]: I0508 00:21:10.062058 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-bpf-maps\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062065 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062075 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-xtables-lock\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062080 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-hostproc" (OuterVolumeSpecName: "hostproc") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062095 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62799f9d-162f-4b84-8843-4677bf722d37-cilium-config-path\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062113 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-hubble-tls\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062329 kubelet[2473]: I0508 00:21:10.062131 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62799f9d-162f-4b84-8843-4677bf722d37-clustermesh-secrets\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062145 2473 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cni-path\") pod \"62799f9d-162f-4b84-8843-4677bf722d37\" (UID: \"62799f9d-162f-4b84-8843-4677bf722d37\") " May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062193 2473 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062213 2473 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062223 2473 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062232 2473 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062241 2473 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.062459 kubelet[2473]: I0508 00:21:10.062248 2473 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.064090 kubelet[2473]: I0508 00:21:10.062095 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.064090 kubelet[2473]: I0508 00:21:10.062107 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.064090 kubelet[2473]: I0508 00:21:10.062267 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cni-path" (OuterVolumeSpecName: "cni-path") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.064090 kubelet[2473]: I0508 00:21:10.063970 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62799f9d-162f-4b84-8843-4677bf722d37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:21:10.064090 kubelet[2473]: I0508 00:21:10.064000 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:21:10.065622 kubelet[2473]: I0508 00:21:10.065574 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-kube-api-access-79rlp" (OuterVolumeSpecName: "kube-api-access-79rlp") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "kube-api-access-79rlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:21:10.065622 kubelet[2473]: I0508 00:21:10.065600 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:21:10.067596 kubelet[2473]: I0508 00:21:10.067563 2473 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62799f9d-162f-4b84-8843-4677bf722d37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "62799f9d-162f-4b84-8843-4677bf722d37" (UID: "62799f9d-162f-4b84-8843-4677bf722d37"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:21:10.155763 systemd[1]: Removed slice kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice - libcontainer container kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice. May 8 00:21:10.155972 systemd[1]: kubepods-burstable-pod62799f9d_162f_4b84_8843_4677bf722d37.slice: Consumed 6.701s CPU time. 
May 8 00:21:10.157417 systemd[1]: Removed slice kubepods-besteffort-pod9b022244_201d_4461_9622_e9cadb32e96f.slice - libcontainer container kubepods-besteffort-pod9b022244_201d_4461_9622_e9cadb32e96f.slice. May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162481 2473 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79rlp\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-kube-api-access-79rlp\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162507 2473 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162516 2473 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62799f9d-162f-4b84-8843-4677bf722d37-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162524 2473 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62799f9d-162f-4b84-8843-4677bf722d37-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162533 2473 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162542 2473 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162550 2473 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62799f9d-162f-4b84-8843-4677bf722d37-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.162575 kubelet[2473]: I0508 00:21:10.162558 2473 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62799f9d-162f-4b84-8843-4677bf722d37-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:21:10.393320 kubelet[2473]: I0508 00:21:10.393225 2473 scope.go:117] "RemoveContainer" containerID="99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef" May 8 00:21:10.394931 containerd[1440]: time="2025-05-08T00:21:10.394746665Z" level=info msg="RemoveContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\"" May 8 00:21:10.401488 containerd[1440]: time="2025-05-08T00:21:10.401413999Z" level=info msg="RemoveContainer for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" returns successfully" May 8 00:21:10.401766 kubelet[2473]: I0508 00:21:10.401735 2473 scope.go:117] "RemoveContainer" containerID="63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785" May 8 00:21:10.403092 containerd[1440]: time="2025-05-08T00:21:10.403058932Z" level=info msg="RemoveContainer for \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\"" May 8 00:21:10.405810 containerd[1440]: time="2025-05-08T00:21:10.405768314Z" level=info msg="RemoveContainer for \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\" returns successfully" May 8 00:21:10.406111 kubelet[2473]: I0508 
00:21:10.406091 2473 scope.go:117] "RemoveContainer" containerID="375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045" May 8 00:21:10.408702 containerd[1440]: time="2025-05-08T00:21:10.408590897Z" level=info msg="RemoveContainer for \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\"" May 8 00:21:10.411593 containerd[1440]: time="2025-05-08T00:21:10.411558080Z" level=info msg="RemoveContainer for \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\" returns successfully" May 8 00:21:10.411767 kubelet[2473]: I0508 00:21:10.411750 2473 scope.go:117] "RemoveContainer" containerID="37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28" May 8 00:21:10.413038 containerd[1440]: time="2025-05-08T00:21:10.412819811Z" level=info msg="RemoveContainer for \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\"" May 8 00:21:10.415911 containerd[1440]: time="2025-05-08T00:21:10.415880755Z" level=info msg="RemoveContainer for \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\" returns successfully" May 8 00:21:10.416082 kubelet[2473]: I0508 00:21:10.416050 2473 scope.go:117] "RemoveContainer" containerID="20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e" May 8 00:21:10.417781 containerd[1440]: time="2025-05-08T00:21:10.417732770Z" level=info msg="RemoveContainer for \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\"" May 8 00:21:10.420723 containerd[1440]: time="2025-05-08T00:21:10.420691074Z" level=info msg="RemoveContainer for \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\" returns successfully" May 8 00:21:10.420872 kubelet[2473]: I0508 00:21:10.420835 2473 scope.go:117] "RemoveContainer" containerID="99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef" May 8 00:21:10.421058 containerd[1440]: time="2025-05-08T00:21:10.421019716Z" level=error msg="ContainerStatus for \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\": not found" May 8 00:21:10.426988 kubelet[2473]: E0508 00:21:10.426950 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\": not found" containerID="99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef" May 8 00:21:10.427063 kubelet[2473]: I0508 00:21:10.426993 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef"} err="failed to get container status \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\": rpc error: code = NotFound desc = an error occurred when try to find container \"99aceaab14533b785c9eb8295dcfab7b14efffc932b826577c66516695bdbdef\": not found" May 8 00:21:10.427063 kubelet[2473]: I0508 00:21:10.427062 2473 scope.go:117] "RemoveContainer" containerID="63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785" May 8 00:21:10.427274 containerd[1440]: time="2025-05-08T00:21:10.427239446Z" level=error msg="ContainerStatus for \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\": not found" May 8 00:21:10.427409 kubelet[2473]: E0508 00:21:10.427378 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\": not found" containerID="63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785" May 8 00:21:10.427409 kubelet[2473]: I0508 00:21:10.427403 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785"} err="failed to get container status \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\": rpc error: code = NotFound desc = an error occurred when try to find container \"63b0ecde5eca5a111da36ff65ebe4bc0cc543f83e83f5b7e90081ed6ab421785\": not found" May 8 00:21:10.427470 kubelet[2473]: I0508 00:21:10.427417 2473 scope.go:117] "RemoveContainer" containerID="375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045" May 8 00:21:10.427621 containerd[1440]: time="2025-05-08T00:21:10.427579849Z" level=error msg="ContainerStatus for \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\": not found" May 8 00:21:10.427741 kubelet[2473]: E0508 00:21:10.427719 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\": not found" containerID="375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045" May 8 00:21:10.427786 kubelet[2473]: I0508 00:21:10.427747 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045"} err="failed to get container status \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\": rpc error: code = NotFound desc = an error occurred when try to find container \"375dc4b23e54ce1f790d35b180fcfd1e54436bcc89fbddb566e658852e04d045\": not found" May 8 00:21:10.427786 kubelet[2473]: I0508 00:21:10.427766 2473 scope.go:117] "RemoveContainer" containerID="37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28" May 8 00:21:10.427936 containerd[1440]: time="2025-05-08T00:21:10.427911212Z" level=error msg="ContainerStatus for \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\": not found" May 8 00:21:10.428047 kubelet[2473]: E0508 00:21:10.428029 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\": not found" containerID="37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28" May 8 00:21:10.428082 kubelet[2473]: I0508 00:21:10.428050 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28"} err="failed to get container status 
\"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\": rpc error: code = NotFound desc = an error occurred when try to find container \"37637514195680fe0cc84f80e1581f46842e904a0b8a330cae578889bb845f28\": not found" May 8 00:21:10.428082 kubelet[2473]: I0508 00:21:10.428065 2473 scope.go:117] "RemoveContainer" containerID="20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e" May 8 00:21:10.428262 containerd[1440]: time="2025-05-08T00:21:10.428241134Z" level=error msg="ContainerStatus for \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\": not found" May 8 00:21:10.428394 kubelet[2473]: E0508 00:21:10.428372 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\": not found" containerID="20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e" May 8 00:21:10.428440 kubelet[2473]: I0508 00:21:10.428398 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e"} err="failed to get container status \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\": rpc error: code = NotFound desc = an error occurred when try to find container \"20337f7bb5392ff220680f407e4cb5aa8a6aa58a38e2982906221d727336344e\": not found" May 8 00:21:10.428440 kubelet[2473]: I0508 00:21:10.428419 2473 scope.go:117] "RemoveContainer" containerID="d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252" May 8 00:21:10.429415 containerd[1440]: time="2025-05-08T00:21:10.429390504Z" level=info msg="RemoveContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\"" May 8 00:21:10.431551 containerd[1440]: time="2025-05-08T00:21:10.431518841Z" level=info msg="RemoveContainer for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" returns successfully" May 8 00:21:10.431733 kubelet[2473]: I0508 00:21:10.431697 2473 scope.go:117] "RemoveContainer" containerID="d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252" May 8 00:21:10.431922 containerd[1440]: time="2025-05-08T00:21:10.431882124Z" level=error msg="ContainerStatus for \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\": not found" May 8 00:21:10.432023 kubelet[2473]: E0508 00:21:10.432003 2473 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\": not found" containerID="d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252" May 8 00:21:10.432065 kubelet[2473]: I0508 00:21:10.432028 2473 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252"} err="failed to get container status \"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d465baf913c08adeec0f640e224dc4d58573635b97651146614fb96ce1d40252\": not found" May 8 00:21:10.723645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d70c5da7e6db9430b7919e924cd5687e016a83b04d9145abe9435a09b8e1d295-rootfs.mount: Deactivated successfully. May 8 00:21:10.723738 systemd[1]: var-lib-kubelet-pods-9b022244\x2d201d\x2d4461\x2d9622\x2de9cadb32e96f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2kfsq.mount: Deactivated successfully. May 8 00:21:10.723801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01abd93ae85c689b813891619f115e99944586ddee84ec64b2fe6af751a22524-rootfs.mount: Deactivated successfully. May 8 00:21:10.723850 systemd[1]: var-lib-kubelet-pods-62799f9d\x2d162f\x2d4b84\x2d8843\x2d4677bf722d37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79rlp.mount: Deactivated successfully. May 8 00:21:10.723912 systemd[1]: var-lib-kubelet-pods-62799f9d\x2d162f\x2d4b84\x2d8843\x2d4677bf722d37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:21:10.723957 systemd[1]: var-lib-kubelet-pods-62799f9d\x2d162f\x2d4b84\x2d8843\x2d4677bf722d37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:21:11.220860 kubelet[2473]: E0508 00:21:11.220814 2473 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:21:11.668775 sshd[4128]: pam_unix(sshd:session): session closed for user core May 8 00:21:11.683842 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:60204.service: Deactivated successfully. May 8 00:21:11.686213 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:21:11.686504 systemd[1]: session-23.scope: Consumed 1.182s CPU time. May 8 00:21:11.687947 systemd-logind[1419]: Session 23 logged out. Waiting for processes to exit. May 8 00:21:11.695828 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:60208.service - OpenSSH per-connection server daemon (10.0.0.1:60208). May 8 00:21:11.697215 systemd-logind[1419]: Removed session 23. May 8 00:21:11.727587 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 60208 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:11.728760 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:11.732630 systemd-logind[1419]: New session 24 of user core. May 8 00:21:11.738427 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:21:12.153181 kubelet[2473]: I0508 00:21:12.152942 2473 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62799f9d-162f-4b84-8843-4677bf722d37" path="/var/lib/kubelet/pods/62799f9d-162f-4b84-8843-4677bf722d37/volumes" May 8 00:21:12.154061 kubelet[2473]: I0508 00:21:12.154028 2473 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b022244-201d-4461-9622-e9cadb32e96f" path="/var/lib/kubelet/pods/9b022244-201d-4461-9622-e9cadb32e96f/volumes" May 8 00:21:12.417638 sshd[4289]: pam_unix(sshd:session): session closed for user core May 8 00:21:12.426185 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:60208.service: Deactivated successfully. May 8 00:21:12.429622 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:21:12.431715 systemd-logind[1419]: Session 24 logged out. Waiting for processes to exit. 
May 8 00:21:12.434055 kubelet[2473]: I0508 00:21:12.434008 2473 memory_manager.go:355] "RemoveStaleState removing state" podUID="62799f9d-162f-4b84-8843-4677bf722d37" containerName="cilium-agent" May 8 00:21:12.434055 kubelet[2473]: I0508 00:21:12.434038 2473 memory_manager.go:355] "RemoveStaleState removing state" podUID="9b022244-201d-4461-9622-e9cadb32e96f" containerName="cilium-operator" May 8 00:21:12.443747 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). May 8 00:21:12.449511 systemd-logind[1419]: Removed session 24. May 8 00:21:12.455768 systemd[1]: Created slice kubepods-burstable-pod693e4049_5ab5_4f10_8ffe_48fc42a30023.slice - libcontainer container kubepods-burstable-pod693e4049_5ab5_4f10_8ffe_48fc42a30023.slice. May 8 00:21:12.476091 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:12.477468 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:12.484349 systemd-logind[1419]: New session 25 of user core. May 8 00:21:12.498492 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:21:12.551949 sshd[4302]: pam_unix(sshd:session): session closed for user core May 8 00:21:12.566393 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:60218.service: Deactivated successfully. May 8 00:21:12.569423 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:21:12.570868 systemd-logind[1419]: Session 25 logged out. Waiting for processes to exit. May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572826 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-bpf-maps\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572865 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhmk\" (UniqueName: \"kubernetes.io/projected/693e4049-5ab5-4f10-8ffe-48fc42a30023-kube-api-access-lqhmk\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572887 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-hostproc\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572905 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-host-proc-sys-net\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572941 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-cni-path\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573198 kubelet[2473]: I0508 00:21:12.572957 2473 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-etc-cni-netd\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.572974 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-lib-modules\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.573007 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-cilium-run\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.573024 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-host-proc-sys-kernel\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.573040 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-cilium-cgroup\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.573059 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/693e4049-5ab5-4f10-8ffe-48fc42a30023-clustermesh-secrets\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573490 kubelet[2473]: I0508 00:21:12.573075 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/693e4049-5ab5-4f10-8ffe-48fc42a30023-cilium-ipsec-secrets\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573684 kubelet[2473]: I0508 00:21:12.573089 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/693e4049-5ab5-4f10-8ffe-48fc42a30023-hubble-tls\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573684 kubelet[2473]: I0508 00:21:12.573104 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/693e4049-5ab5-4f10-8ffe-48fc42a30023-cilium-config-path\") pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.573684 kubelet[2473]: I0508 00:21:12.573121 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/693e4049-5ab5-4f10-8ffe-48fc42a30023-xtables-lock\") 
pod \"cilium-6sbg2\" (UID: \"693e4049-5ab5-4f10-8ffe-48fc42a30023\") " pod="kube-system/cilium-6sbg2" May 8 00:21:12.584304 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:33302.service - OpenSSH per-connection server daemon (10.0.0.1:33302). May 8 00:21:12.585475 systemd-logind[1419]: Removed session 25. May 8 00:21:12.616092 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 33302 ssh2: RSA SHA256:uqTlxEVIlqO7YszCld6UTwkJvggHOENVL9xK+bdetOE May 8 00:21:12.617466 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:21:12.621186 systemd-logind[1419]: New session 26 of user core. May 8 00:21:12.628412 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:21:12.760575 kubelet[2473]: E0508 00:21:12.760538 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:12.761058 containerd[1440]: time="2025-05-08T00:21:12.761017761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sbg2,Uid:693e4049-5ab5-4f10-8ffe-48fc42a30023,Namespace:kube-system,Attempt:0,}" May 8 00:21:12.780678 containerd[1440]: time="2025-05-08T00:21:12.780589431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:21:12.780678 containerd[1440]: time="2025-05-08T00:21:12.780642152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:21:12.780678 containerd[1440]: time="2025-05-08T00:21:12.780672312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:21:12.780912 containerd[1440]: time="2025-05-08T00:21:12.780767953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:21:12.798492 systemd[1]: Started cri-containerd-dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543.scope - libcontainer container dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543. 
May 8 00:21:12.816851 containerd[1440]: time="2025-05-08T00:21:12.816730590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sbg2,Uid:693e4049-5ab5-4f10-8ffe-48fc42a30023,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\"" May 8 00:21:12.817526 kubelet[2473]: E0508 00:21:12.817418 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:12.819649 containerd[1440]: time="2025-05-08T00:21:12.819526851Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:21:12.837443 containerd[1440]: time="2025-05-08T00:21:12.837396709Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6\"" May 8 00:21:12.838036 containerd[1440]: time="2025-05-08T00:21:12.838012513Z" level=info msg="StartContainer for \"ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6\"" May 8 00:21:12.861467 systemd[1]: Started cri-containerd-ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6.scope - libcontainer container ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6. May 8 00:21:12.883066 containerd[1440]: time="2025-05-08T00:21:12.883020980Z" level=info msg="StartContainer for \"ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6\" returns successfully" May 8 00:21:12.895010 systemd[1]: cri-containerd-ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6.scope: Deactivated successfully. 
May 8 00:21:12.922320 containerd[1440]: time="2025-05-08T00:21:12.922227641Z" level=info msg="shim disconnected" id=ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6 namespace=k8s.io May 8 00:21:12.922320 containerd[1440]: time="2025-05-08T00:21:12.922309482Z" level=warning msg="cleaning up after shim disconnected" id=ba2bb1c593e082a4c5ebd8e1639349773522fa8fdeb95d629f271efa11384ec6 namespace=k8s.io May 8 00:21:12.922320 containerd[1440]: time="2025-05-08T00:21:12.922322722Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:13.404830 kubelet[2473]: E0508 00:21:13.404759 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:13.407774 containerd[1440]: time="2025-05-08T00:21:13.407643353Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:21:13.420743 containerd[1440]: time="2025-05-08T00:21:13.420693811Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76\"" May 8 00:21:13.422089 containerd[1440]: time="2025-05-08T00:21:13.421263256Z" level=info msg="StartContainer for \"dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76\"" May 8 00:21:13.451508 systemd[1]: Started cri-containerd-dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76.scope - libcontainer container dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76. May 8 00:21:13.475884 containerd[1440]: time="2025-05-08T00:21:13.475813427Z" level=info msg="StartContainer for \"dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76\" returns successfully" May 8 00:21:13.478312 systemd[1]: cri-containerd-dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76.scope: Deactivated successfully. 
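
mount-cgroup and apply-sysctl-overwrites above are run-to-completion init steps: each "StartContainer ... returns successfully" is followed almost immediately by its scope deactivating and the shim disconnecting. Through the containerd client that pattern looks roughly like the sketch below (assumes an existing containerd.Container; not cilium's or kubelet's actual code):

    // initchain_sketch.go: start a task, wait for it to exit, then reap
    // it, which is exactly the start/deactivate/disconnect rhythm above.
    package sketch

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
    )

    func runToCompletion(ctx context.Context, container containerd.Container) (uint32, error) {
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            return 0, err
        }
        defer task.Delete(ctx) // reaps the shim once the task is done

        exitCh, err := task.Wait(ctx) // subscribe before starting: no missed exits
        if err != nil {
            return 0, err
        }
        if err := task.Start(ctx); err != nil {
            return 0, err
        }
        st := <-exitCh
        code, _, err := st.Result()
        if err != nil {
            return 0, err
        }
        fmt.Printf("init step exited with code %d\n", code)
        return code, nil
    }
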
May 8 00:21:13.499674 containerd[1440]: time="2025-05-08T00:21:13.499511245Z" level=info msg="shim disconnected" id=dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76 namespace=k8s.io May 8 00:21:13.499674 containerd[1440]: time="2025-05-08T00:21:13.499561166Z" level=warning msg="cleaning up after shim disconnected" id=dad256d1e0377b545056a8a6887d24707107622cbd90d1e551fb4ac1a3ea3d76 namespace=k8s.io May 8 00:21:13.499674 containerd[1440]: time="2025-05-08T00:21:13.499569126Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:14.408799 kubelet[2473]: E0508 00:21:14.408705 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:14.415021 containerd[1440]: time="2025-05-08T00:21:14.414842242Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:21:14.434395 containerd[1440]: time="2025-05-08T00:21:14.434347426Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc\"" May 8 00:21:14.435242 containerd[1440]: time="2025-05-08T00:21:14.435117952Z" level=info msg="StartContainer for \"14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc\"" May 8 00:21:14.463484 systemd[1]: Started cri-containerd-14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc.scope - libcontainer container 14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc. May 8 00:21:14.489406 systemd[1]: cri-containerd-14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc.scope: Deactivated successfully. May 8 00:21:14.490080 containerd[1440]: time="2025-05-08T00:21:14.489726835Z" level=info msg="StartContainer for \"14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc\" returns successfully" May 8 00:21:14.519934 containerd[1440]: time="2025-05-08T00:21:14.519875458Z" level=info msg="shim disconnected" id=14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc namespace=k8s.io May 8 00:21:14.519934 containerd[1440]: time="2025-05-08T00:21:14.519928178Z" level=warning msg="cleaning up after shim disconnected" id=14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc namespace=k8s.io May 8 00:21:14.519934 containerd[1440]: time="2025-05-08T00:21:14.519936498Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:14.677976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14748a740189c1471942c9071815d1ff60e2bc5f5792f581133c208701b946fc-rootfs.mount: Deactivated successfully. May 8 00:21:15.413200 kubelet[2473]: E0508 00:21:15.412818 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:15.416622 containerd[1440]: time="2025-05-08T00:21:15.416590738Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:21:15.438972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075196211.mount: Deactivated successfully. 
May 8 00:21:15.440418 containerd[1440]: time="2025-05-08T00:21:15.440375030Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82\"" May 8 00:21:15.440820 containerd[1440]: time="2025-05-08T00:21:15.440799834Z" level=info msg="StartContainer for \"5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82\"" May 8 00:21:15.465456 systemd[1]: Started cri-containerd-5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82.scope - libcontainer container 5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82. May 8 00:21:15.486009 systemd[1]: cri-containerd-5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82.scope: Deactivated successfully. May 8 00:21:15.487123 containerd[1440]: time="2025-05-08T00:21:15.487043368Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod693e4049_5ab5_4f10_8ffe_48fc42a30023.slice/cri-containerd-5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82.scope/memory.events\": no such file or directory" May 8 00:21:15.491747 containerd[1440]: time="2025-05-08T00:21:15.491556241Z" level=info msg="StartContainer for \"5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82\" returns successfully" May 8 00:21:15.510310 containerd[1440]: time="2025-05-08T00:21:15.510235456Z" level=info msg="shim disconnected" id=5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82 namespace=k8s.io May 8 00:21:15.510310 containerd[1440]: time="2025-05-08T00:21:15.510307537Z" level=warning msg="cleaning up after shim disconnected" id=5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82 namespace=k8s.io May 8 00:21:15.510310 containerd[1440]: time="2025-05-08T00:21:15.510317937Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:21:15.677995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5886888c459b71f99d187c5842fa6db119ae5d73519adc4cec3a50d6cb677b82-rootfs.mount: Deactivated successfully. 
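
The only hiccup in the init chain is the warning above: clean-cilium-state exits so quickly that its cgroup is deleted before containerd's event watcher can attach to memory.events, so the inotify add fails with ENOENT. The race is benign and merits only a warning. A sketch of handling it the same way (fsnotify stands in here for containerd's cgroup watcher; illustrative only):

    // cgwatch_sketch.go: an ENOENT while attaching to a just-exited
    // scope's memory.events is logged and ignored, as containerd does.
    package sketch

    import (
        "errors"
        "io/fs"
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func watchMemoryEvents(cgroupDir string) error {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            return err
        }
        defer w.Close()
        if err := w.Add(cgroupDir + "/memory.events"); err != nil {
            if errors.Is(err, fs.ErrNotExist) {
                // The scope already exited and cgroupfs removed the
                // file: warn and carry on.
                log.Printf("cgroup gone before watch attached: %v", err)
                return nil
            }
            return err
        }
        // A real watcher would consume w.Events and w.Errors here.
        return nil
    }
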
May 8 00:21:16.222338 kubelet[2473]: E0508 00:21:16.222294 2473 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:21:16.418645 kubelet[2473]: E0508 00:21:16.418569 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:16.424503 containerd[1440]: time="2025-05-08T00:21:16.424451651Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:21:16.447614 containerd[1440]: time="2025-05-08T00:21:16.447519215Z" level=info msg="CreateContainer within sandbox \"dd0af8e29e38895085f2ab087a9b2ea9362a8125349467838b1b716b5d628543\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0\"" May 8 00:21:16.448059 containerd[1440]: time="2025-05-08T00:21:16.448025218Z" level=info msg="StartContainer for \"3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0\"" May 8 00:21:16.473816 systemd[1]: Started cri-containerd-3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0.scope - libcontainer container 3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0. May 8 00:21:16.498571 containerd[1440]: time="2025-05-08T00:21:16.498523136Z" level=info msg="StartContainer for \"3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0\" returns successfully" May 8 00:21:16.799324 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 8 00:21:17.424661 kubelet[2473]: E0508 00:21:17.424359 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:17.449616 kubelet[2473]: I0508 00:21:17.449529 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6sbg2" podStartSLOduration=5.449510339 podStartE2EDuration="5.449510339s" podCreationTimestamp="2025-05-08 00:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:21:17.448999336 +0000 UTC m=+81.389473952" watchObservedRunningTime="2025-05-08 00:21:17.449510339 +0000 UTC m=+81.389984955" May 8 00:21:18.078788 kubelet[2473]: I0508 00:21:18.078446 2473 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:21:18Z","lastTransitionTime":"2025-05-08T00:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:21:18.761195 kubelet[2473]: E0508 00:21:18.761166 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:18.948417 systemd[1]: run-containerd-runc-k8s.io-3c1ff4f220c0cfbaf9c8686733766eeb390bda488688d1e52f25f4c8928f09e0-runc.dXy2Xi.mount: Deactivated successfully. 
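
Since 05-cilium.conf was removed at the start of the teardown, kubelet has been holding the node's Ready condition at False ("cni plugin not initialized"), and the "Node became not ready" heartbeat above records that state. Once the new cilium-agent rewrites the CNI config, the condition flips back, as the lxc_health recovery below shows. A sketch of reading that condition with client-go (in-cluster access assumed; node name taken from the log):

    // readiness_sketch.go: inspect the NodeReady condition that the
    // "container runtime network not ready" entries above refer to.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
            }
        }
    }
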
May 8 00:21:19.570549 systemd-networkd[1370]: lxc_health: Link UP May 8 00:21:19.577154 systemd-networkd[1370]: lxc_health: Gained carrier May 8 00:21:20.763043 kubelet[2473]: E0508 00:21:20.762996 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:20.928682 systemd-networkd[1370]: lxc_health: Gained IPv6LL May 8 00:21:21.431565 kubelet[2473]: E0508 00:21:21.431537 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:22.149117 kubelet[2473]: E0508 00:21:22.149088 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:22.434057 kubelet[2473]: E0508 00:21:22.433830 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:21:25.343557 sshd[4310]: pam_unix(sshd:session): session closed for user core May 8 00:21:25.346236 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:33302.service: Deactivated successfully. May 8 00:21:25.348145 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:21:25.349767 systemd-logind[1419]: Session 26 logged out. Waiting for processes to exit. May 8 00:21:25.350893 systemd-logind[1419]: Removed session 26.