May 8 23:55:06.999168 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 23:55:06.999190 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu May 8 22:43:24 -00 2025
May 8 23:55:06.999200 kernel: KASLR enabled
May 8 23:55:06.999206 kernel: efi: EFI v2.7 by EDK II
May 8 23:55:06.999212 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 8 23:55:06.999218 kernel: random: crng init done
May 8 23:55:06.999226 kernel: ACPI: Early table checksum verification disabled
May 8 23:55:06.999232 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 8 23:55:06.999238 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 23:55:06.999246 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999252 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999258 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999264 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999270 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999278 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999286 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999292 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999299 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:55:06.999305 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 23:55:06.999312 kernel: NUMA: Failed to initialise from firmware
May 8 23:55:06.999318 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:55:06.999324 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 23:55:06.999331 kernel: Zone ranges:
May 8 23:55:06.999337 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:55:06.999343 kernel: DMA32 empty
May 8 23:55:06.999351 kernel: Normal empty
May 8 23:55:06.999358 kernel: Movable zone start for each node
May 8 23:55:06.999364 kernel: Early memory node ranges
May 8 23:55:06.999371 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 23:55:06.999377 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 23:55:06.999384 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 23:55:06.999390 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 23:55:06.999416 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 23:55:06.999423 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 23:55:06.999429 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 23:55:06.999436 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:55:06.999442 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 23:55:06.999450 kernel: psci: probing for conduit method from ACPI.
May 8 23:55:06.999457 kernel: psci: PSCIv1.1 detected in firmware.
May 8 23:55:06.999464 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 23:55:06.999473 kernel: psci: Trusted OS migration not required
May 8 23:55:06.999480 kernel: psci: SMC Calling Convention v1.1
May 8 23:55:06.999487 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 23:55:06.999495 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 8 23:55:06.999502 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 8 23:55:06.999510 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 23:55:06.999517 kernel: Detected PIPT I-cache on CPU0
May 8 23:55:06.999524 kernel: CPU features: detected: GIC system register CPU interface
May 8 23:55:06.999530 kernel: CPU features: detected: Hardware dirty bit management
May 8 23:55:06.999537 kernel: CPU features: detected: Spectre-v4
May 8 23:55:06.999544 kernel: CPU features: detected: Spectre-BHB
May 8 23:55:06.999550 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 23:55:06.999557 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 23:55:06.999566 kernel: CPU features: detected: ARM erratum 1418040
May 8 23:55:06.999573 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 23:55:06.999579 kernel: alternatives: applying boot alternatives
May 8 23:55:06.999587 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 8 23:55:06.999595 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 23:55:06.999602 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 23:55:06.999609 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 23:55:06.999616 kernel: Fallback order for Node 0: 0
May 8 23:55:06.999623 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 23:55:06.999629 kernel: Policy zone: DMA
May 8 23:55:06.999636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 23:55:06.999644 kernel: software IO TLB: area num 4.
May 8 23:55:06.999651 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 23:55:06.999659 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 8 23:55:06.999665 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 23:55:06.999672 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 23:55:06.999680 kernel: rcu: RCU event tracing is enabled.
May 8 23:55:06.999687 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 23:55:06.999694 kernel: Trampoline variant of Tasks RCU enabled.
May 8 23:55:06.999706 kernel: Tracing variant of Tasks RCU enabled.
May 8 23:55:06.999713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 23:55:06.999720 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 23:55:06.999727 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 23:55:06.999737 kernel: GICv3: 256 SPIs implemented
May 8 23:55:06.999744 kernel: GICv3: 0 Extended SPIs implemented
May 8 23:55:06.999751 kernel: Root IRQ handler: gic_handle_irq
May 8 23:55:06.999758 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 23:55:06.999765 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 23:55:06.999772 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 23:55:06.999779 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 23:55:06.999786 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 23:55:06.999793 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 23:55:06.999800 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 23:55:06.999807 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 23:55:06.999815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:55:06.999822 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 23:55:06.999829 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 23:55:06.999836 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 23:55:06.999843 kernel: arm-pv: using stolen time PV
May 8 23:55:06.999850 kernel: Console: colour dummy device 80x25
May 8 23:55:06.999857 kernel: ACPI: Core revision 20230628
May 8 23:55:06.999864 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 23:55:06.999871 kernel: pid_max: default: 32768 minimum: 301
May 8 23:55:06.999877 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 23:55:06.999886 kernel: landlock: Up and running.
May 8 23:55:06.999892 kernel: SELinux: Initializing.
May 8 23:55:06.999899 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:55:06.999907 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:55:06.999914 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 8 23:55:06.999921 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:55:06.999929 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:55:06.999935 kernel: rcu: Hierarchical SRCU implementation.
May 8 23:55:06.999942 kernel: rcu: Max phase no-delay instances is 400.
May 8 23:55:06.999951 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 23:55:06.999957 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 23:55:06.999964 kernel: Remapping and enabling EFI services.
May 8 23:55:06.999971 kernel: smp: Bringing up secondary CPUs ...
May 8 23:55:06.999978 kernel: Detected PIPT I-cache on CPU1
May 8 23:55:06.999985 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 23:55:06.999992 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 23:55:07.000000 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:55:07.000007 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 23:55:07.000016 kernel: Detected PIPT I-cache on CPU2
May 8 23:55:07.000023 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 23:55:07.000031 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 23:55:07.000043 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:55:07.000053 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 23:55:07.000060 kernel: Detected PIPT I-cache on CPU3
May 8 23:55:07.000068 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 23:55:07.000075 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 23:55:07.000098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:55:07.000105 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 23:55:07.000113 kernel: smp: Brought up 1 node, 4 CPUs
May 8 23:55:07.000122 kernel: SMP: Total of 4 processors activated.
May 8 23:55:07.000130 kernel: CPU features: detected: 32-bit EL0 Support
May 8 23:55:07.000137 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 23:55:07.000145 kernel: CPU features: detected: Common not Private translations
May 8 23:55:07.000152 kernel: CPU features: detected: CRC32 instructions
May 8 23:55:07.000159 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 23:55:07.000168 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 23:55:07.000175 kernel: CPU features: detected: LSE atomic instructions
May 8 23:55:07.000183 kernel: CPU features: detected: Privileged Access Never
May 8 23:55:07.000191 kernel: CPU features: detected: RAS Extension Support
May 8 23:55:07.000198 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 23:55:07.000205 kernel: CPU: All CPU(s) started at EL1
May 8 23:55:07.000213 kernel: alternatives: applying system-wide alternatives
May 8 23:55:07.000220 kernel: devtmpfs: initialized
May 8 23:55:07.000228 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 23:55:07.000237 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 23:55:07.000244 kernel: pinctrl core: initialized pinctrl subsystem
May 8 23:55:07.000251 kernel: SMBIOS 3.0.0 present.
May 8 23:55:07.000258 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 8 23:55:07.000266 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 23:55:07.000273 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 23:55:07.000281 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 23:55:07.000288 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 23:55:07.000296 kernel: audit: initializing netlink subsys (disabled)
May 8 23:55:07.000305 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 8 23:55:07.000312 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 23:55:07.000319 kernel: cpuidle: using governor menu
May 8 23:55:07.000326 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 23:55:07.000333 kernel: ASID allocator initialised with 32768 entries
May 8 23:55:07.000341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 23:55:07.000348 kernel: Serial: AMBA PL011 UART driver
May 8 23:55:07.000355 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 23:55:07.000362 kernel: Modules: 0 pages in range for non-PLT usage
May 8 23:55:07.000371 kernel: Modules: 509008 pages in range for PLT usage
May 8 23:55:07.000378 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 23:55:07.000386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 23:55:07.000480 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 23:55:07.000492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 23:55:07.000500 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 23:55:07.000507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 23:55:07.000515 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 23:55:07.000522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 23:55:07.000532 kernel: ACPI: Added _OSI(Module Device)
May 8 23:55:07.000540 kernel: ACPI: Added _OSI(Processor Device)
May 8 23:55:07.000547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 23:55:07.000554 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 23:55:07.000562 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 23:55:07.000569 kernel: ACPI: Interpreter enabled
May 8 23:55:07.000576 kernel: ACPI: Using GIC for interrupt routing
May 8 23:55:07.000583 kernel: ACPI: MCFG table detected, 1 entries
May 8 23:55:07.000591 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 23:55:07.000598 kernel: printk: console [ttyAMA0] enabled
May 8 23:55:07.000607 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 23:55:07.000778 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 23:55:07.000861 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 23:55:07.000932 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 23:55:07.000999 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 23:55:07.001067 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 23:55:07.001077 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 23:55:07.001089 kernel: PCI host bridge to bus 0000:00
May 8 23:55:07.001172 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 23:55:07.001235 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 23:55:07.001294 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 23:55:07.001354 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 23:55:07.001454 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 23:55:07.001544 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 23:55:07.001613 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 23:55:07.001683 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 23:55:07.001767 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:55:07.001842 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:55:07.001913 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 23:55:07.001982 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 23:55:07.002054 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 23:55:07.002116 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 23:55:07.002179 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 23:55:07.002189 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 23:55:07.002196 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 23:55:07.002204 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 23:55:07.002211 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 23:55:07.002218 kernel: iommu: Default domain type: Translated
May 8 23:55:07.002229 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 23:55:07.002236 kernel: efivars: Registered efivars operations
May 8 23:55:07.002244 kernel: vgaarb: loaded
May 8 23:55:07.002252 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 23:55:07.002260 kernel: VFS: Disk quotas dquot_6.6.0
May 8 23:55:07.002268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 23:55:07.002276 kernel: pnp: PnP ACPI init
May 8 23:55:07.002360 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 23:55:07.002372 kernel: pnp: PnP ACPI: found 1 devices
May 8 23:55:07.002382 kernel: NET: Registered PF_INET protocol family
May 8 23:55:07.002390 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 23:55:07.002438 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 23:55:07.002447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 23:55:07.002454 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 23:55:07.002462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 23:55:07.002470 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 23:55:07.002477 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:55:07.002487 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:55:07.002495 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 23:55:07.002502 kernel: PCI: CLS 0 bytes, default 64
May 8 23:55:07.002509 kernel: kvm [1]: HYP mode not available
May 8 23:55:07.002516 kernel: Initialise system trusted keyrings
May 8 23:55:07.002524 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 23:55:07.002531 kernel: Key type asymmetric registered
May 8 23:55:07.002538 kernel: Asymmetric key parser 'x509' registered
May 8 23:55:07.002546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 23:55:07.002553 kernel: io scheduler mq-deadline registered
May 8 23:55:07.002562 kernel: io scheduler kyber registered
May 8 23:55:07.002569 kernel: io scheduler bfq registered
May 8 23:55:07.002577 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 23:55:07.002584 kernel: ACPI: button: Power Button [PWRB]
May 8 23:55:07.002592 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 23:55:07.002670 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 23:55:07.002681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 23:55:07.002689 kernel: thunder_xcv, ver 1.0
May 8 23:55:07.002696 kernel: thunder_bgx, ver 1.0
May 8 23:55:07.002713 kernel: nicpf, ver 1.0
May 8 23:55:07.002721 kernel: nicvf, ver 1.0
May 8 23:55:07.002803 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 23:55:07.002869 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:55:06 UTC (1746748506)
May 8 23:55:07.002879 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 23:55:07.002887 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 23:55:07.002894 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 23:55:07.002902 kernel: watchdog: Hard watchdog permanently disabled
May 8 23:55:07.002913 kernel: NET: Registered PF_INET6 protocol family
May 8 23:55:07.002920 kernel: Segment Routing with IPv6
May 8 23:55:07.002927 kernel: In-situ OAM (IOAM) with IPv6
May 8 23:55:07.002934 kernel: NET: Registered PF_PACKET protocol family
May 8 23:55:07.002941 kernel: Key type dns_resolver registered
May 8 23:55:07.002949 kernel: registered taskstats version 1
May 8 23:55:07.002956 kernel: Loading compiled-in X.509 certificates
May 8 23:55:07.002964 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7944e0e0bec5e8cad487856da19569eba337cea0'
May 8 23:55:07.002971 kernel: Key type .fscrypt registered
May 8 23:55:07.002981 kernel: Key type fscrypt-provisioning registered
May 8 23:55:07.002988 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 23:55:07.002996 kernel: ima: Allocated hash algorithm: sha1
May 8 23:55:07.003003 kernel: ima: No architecture policies found
May 8 23:55:07.003011 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 23:55:07.003018 kernel: clk: Disabling unused clocks
May 8 23:55:07.003026 kernel: Freeing unused kernel memory: 39424K
May 8 23:55:07.003034 kernel: Run /init as init process
May 8 23:55:07.003041 kernel: with arguments:
May 8 23:55:07.003050 kernel: /init
May 8 23:55:07.003057 kernel: with environment:
May 8 23:55:07.003064 kernel: HOME=/
May 8 23:55:07.003071 kernel: TERM=linux
May 8 23:55:07.003078 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 23:55:07.003087 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:55:07.003097 systemd[1]: Detected virtualization kvm.
May 8 23:55:07.003107 systemd[1]: Detected architecture arm64.
May 8 23:55:07.003114 systemd[1]: Running in initrd.
May 8 23:55:07.003122 systemd[1]: No hostname configured, using default hostname.
May 8 23:55:07.003130 systemd[1]: Hostname set to <localhost>.
May 8 23:55:07.003138 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:55:07.003146 systemd[1]: Queued start job for default target initrd.target.
May 8 23:55:07.003154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:55:07.003162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:55:07.003172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 23:55:07.003180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:55:07.003188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 23:55:07.003196 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 23:55:07.003206 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 23:55:07.003214 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 23:55:07.003223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:55:07.003233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:55:07.003241 systemd[1]: Reached target paths.target - Path Units.
May 8 23:55:07.003250 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:55:07.003258 systemd[1]: Reached target swap.target - Swaps.
May 8 23:55:07.003266 systemd[1]: Reached target timers.target - Timer Units.
May 8 23:55:07.003274 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 23:55:07.003282 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 23:55:07.003290 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 23:55:07.003298 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 23:55:07.003308 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:55:07.003316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:55:07.003324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:55:07.003332 systemd[1]: Reached target sockets.target - Socket Units.
May 8 23:55:07.003341 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 23:55:07.003349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:55:07.003357 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 23:55:07.003365 systemd[1]: Starting systemd-fsck-usr.service...
May 8 23:55:07.003374 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:55:07.003383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:55:07.003392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:55:07.003411 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 23:55:07.003419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:55:07.003427 systemd[1]: Finished systemd-fsck-usr.service.
May 8 23:55:07.003438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:55:07.003466 systemd-journald[239]: Collecting audit messages is disabled.
May 8 23:55:07.003487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:55:07.003497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:55:07.003506 systemd-journald[239]: Journal started
May 8 23:55:07.003525 systemd-journald[239]: Runtime Journal (/run/log/journal/cd19ff2562244adabe87d32b6cc113b5) is 5.9M, max 47.3M, 41.4M free.
May 8 23:55:06.995557 systemd-modules-load[240]: Inserted module 'overlay'
May 8 23:55:07.009195 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:55:07.010039 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 23:55:07.012067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:55:07.015241 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 8 23:55:07.016157 kernel: Bridge firewalling registered
May 8 23:55:07.024813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:55:07.027964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:55:07.029638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:55:07.032292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:55:07.041718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:55:07.044666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:55:07.048326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:55:07.049935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:55:07.062605 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 23:55:07.065033 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:55:07.073315 dracut-cmdline[276]: dracut-dracut-053
May 8 23:55:07.076164 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8e29bd932c31237847976018676f554a4d09fa105e08b3bc01bcbb09708aefd3
May 8 23:55:07.095560 systemd-resolved[278]: Positive Trust Anchors:
May 8 23:55:07.095579 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:55:07.095610 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:55:07.100791 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 8 23:55:07.103750 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:55:07.105165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:55:07.151432 kernel: SCSI subsystem initialized
May 8 23:55:07.156415 kernel: Loading iSCSI transport class v2.0-870.
May 8 23:55:07.164432 kernel: iscsi: registered transport (tcp)
May 8 23:55:07.179424 kernel: iscsi: registered transport (qla4xxx)
May 8 23:55:07.179455 kernel: QLogic iSCSI HBA Driver
May 8 23:55:07.224211 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 23:55:07.235561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 23:55:07.252732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 23:55:07.252812 kernel: device-mapper: uevent: version 1.0.3
May 8 23:55:07.256454 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 23:55:07.301427 kernel: raid6: neonx8 gen() 15771 MB/s
May 8 23:55:07.318421 kernel: raid6: neonx4 gen() 15645 MB/s
May 8 23:55:07.335416 kernel: raid6: neonx2 gen() 13265 MB/s
May 8 23:55:07.352413 kernel: raid6: neonx1 gen() 10473 MB/s
May 8 23:55:07.369414 kernel: raid6: int64x8 gen() 6965 MB/s
May 8 23:55:07.386413 kernel: raid6: int64x4 gen() 7344 MB/s
May 8 23:55:07.403418 kernel: raid6: int64x2 gen() 6127 MB/s
May 8 23:55:07.420552 kernel: raid6: int64x1 gen() 5050 MB/s
May 8 23:55:07.420606 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
May 8 23:55:07.438531 kernel: raid6: .... xor() 11919 MB/s, rmw enabled
May 8 23:55:07.438551 kernel: raid6: using neon recovery algorithm
May 8 23:55:07.443415 kernel: xor: measuring software checksum speed
May 8 23:55:07.444649 kernel: 8regs : 17552 MB/sec
May 8 23:55:07.444666 kernel: 32regs : 19631 MB/sec
May 8 23:55:07.445904 kernel: arm64_neon : 25964 MB/sec
May 8 23:55:07.445916 kernel: xor: using function: arm64_neon (25964 MB/sec)
May 8 23:55:07.502429 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 23:55:07.516210 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:55:07.525618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:55:07.539845 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 8 23:55:07.543253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:55:07.549603 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 23:55:07.565113 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 8 23:55:07.594861 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 23:55:07.603563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 23:55:07.646892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 23:55:07.656593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 23:55:07.670528 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 23:55:07.672541 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 23:55:07.674623 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 23:55:07.677081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 23:55:07.685950 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 23:55:07.697016 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 23:55:07.701829 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 23:55:07.702043 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 23:55:07.706064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 23:55:07.706170 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:55:07.716575 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 23:55:07.716599 kernel: GPT:9289727 != 19775487
May 8 23:55:07.716610 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 23:55:07.716620 kernel: GPT:9289727 != 19775487
May 8 23:55:07.716637 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 23:55:07.716647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:55:07.716600 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:55:07.718512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 23:55:07.718674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:55:07.721378 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:55:07.731416 kernel: BTRFS: device fsid 9a510efc-c158-4845-bfb8-279f8b20070f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (514)
May 8 23:55:07.731445 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (515)
May 8 23:55:07.736883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:55:07.748486 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 23:55:07.753934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 23:55:07.755491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:55:07.767057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 23:55:07.771112 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 23:55:07.772453 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 23:55:07.791604 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 23:55:07.793586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:55:07.816548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:55:07.828431 disk-uuid[556]: Primary Header is updated.
May 8 23:55:07.828431 disk-uuid[556]: Secondary Entries is updated.
May 8 23:55:07.828431 disk-uuid[556]: Secondary Header is updated.
May 8 23:55:07.832427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:55:08.849437 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:55:08.850261 disk-uuid[565]: The operation has completed successfully.
May 8 23:55:08.869724 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 23:55:08.869823 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 23:55:08.892561 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 23:55:08.895820 sh[576]: Success
May 8 23:55:08.916194 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 23:55:08.953176 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 23:55:08.969179 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 23:55:08.972468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 23:55:08.982836 kernel: BTRFS info (device dm-0): first mount of filesystem 9a510efc-c158-4845-bfb8-279f8b20070f
May 8 23:55:08.982889 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 23:55:08.982901 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 23:55:08.984894 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 23:55:08.984911 kernel: BTRFS info (device dm-0): using free space tree
May 8 23:55:08.989360 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 23:55:08.991036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 23:55:09.007618 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 23:55:09.009254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 23:55:09.017919 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:09.017969 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:55:09.017981 kernel: BTRFS info (device vda6): using free space tree
May 8 23:55:09.020434 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:55:09.029408 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 23:55:09.031310 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:09.037156 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 23:55:09.045687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 23:55:09.133517 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 23:55:09.141592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 23:55:09.150171 ignition[667]: Ignition 2.19.0
May 8 23:55:09.150184 ignition[667]: Stage: fetch-offline
May 8 23:55:09.150224 ignition[667]: no configs at "/usr/lib/ignition/base.d"
May 8 23:55:09.150232 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:09.150412 ignition[667]: parsed url from cmdline: ""
May 8 23:55:09.150416 ignition[667]: no config URL provided
May 8 23:55:09.150421 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
May 8 23:55:09.150428 ignition[667]: no config at "/usr/lib/ignition/user.ign"
May 8 23:55:09.150456 ignition[667]: op(1): [started] loading QEMU firmware config module
May 8 23:55:09.150461 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 23:55:09.163025 ignition[667]: op(1): [finished] loading QEMU firmware config module
May 8 23:55:09.168217 systemd-networkd[766]: lo: Link UP
May 8 23:55:09.168230 systemd-networkd[766]: lo: Gained carrier
May 8 23:55:09.169160 systemd-networkd[766]: Enumeration completed
May 8 23:55:09.169457 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 23:55:09.171302 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:55:09.171305 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 23:55:09.172212 systemd-networkd[766]: eth0: Link UP
May 8 23:55:09.172215 systemd-networkd[766]: eth0: Gained carrier
May 8 23:55:09.172223 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:55:09.172257 systemd[1]: Reached target network.target - Network.
May 8 23:55:09.186452 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 23:55:09.214982 ignition[667]: parsing config with SHA512: 4e251e1358782d866a2abc3e0ecca1d1ee4549455284e47a034b523814173e5164c14d8e165c57d4389d45f8383d6afb0e268c7ab21ff9f734627dcdadbf106c
May 8 23:55:09.219135 unknown[667]: fetched base config from "system"
May 8 23:55:09.219144 unknown[667]: fetched user config from "qemu"
May 8 23:55:09.219588 ignition[667]: fetch-offline: fetch-offline passed
May 8 23:55:09.219650 ignition[667]: Ignition finished successfully
May 8 23:55:09.221903 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 23:55:09.223957 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 23:55:09.237611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 23:55:09.248852 ignition[772]: Ignition 2.19.0
May 8 23:55:09.248863 ignition[772]: Stage: kargs
May 8 23:55:09.249047 ignition[772]: no configs at "/usr/lib/ignition/base.d"
May 8 23:55:09.249058 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:09.250023 ignition[772]: kargs: kargs passed
May 8 23:55:09.250081 ignition[772]: Ignition finished successfully
May 8 23:55:09.254054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 23:55:09.266617 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 23:55:09.276930 ignition[780]: Ignition 2.19.0
May 8 23:55:09.276940 ignition[780]: Stage: disks
May 8 23:55:09.277110 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 8 23:55:09.279782 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 23:55:09.277120 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:09.281288 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 23:55:09.278100 ignition[780]: disks: disks passed
May 8 23:55:09.282903 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 23:55:09.278152 ignition[780]: Ignition finished successfully
May 8 23:55:09.284914 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 23:55:09.286689 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 23:55:09.288104 systemd[1]: Reached target basic.target - Basic System.
May 8 23:55:09.298583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 23:55:09.310388 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 23:55:09.314740 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 23:55:09.322517 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 23:55:09.368421 kernel: EXT4-fs (vda9): mounted filesystem 1a8c7c5d-87ec-4bc4-aa01-1ebc1d3c20e7 r/w with ordered data mode. Quota mode: none.
May 8 23:55:09.369212 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 23:55:09.370557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 23:55:09.385507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 23:55:09.387315 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 23:55:09.388770 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 23:55:09.388815 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 23:55:09.395369 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
May 8 23:55:09.395405 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:09.388840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 23:55:09.399918 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:55:09.399941 kernel: BTRFS info (device vda6): using free space tree
May 8 23:55:09.399951 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:55:09.396377 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 23:55:09.402245 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 23:55:09.404621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 23:55:09.450248 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 8 23:55:09.455352 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 8 23:55:09.459921 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 8 23:55:09.462854 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 23:55:09.537043 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 23:55:09.550521 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 23:55:09.552181 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 23:55:09.558426 kernel: BTRFS info (device vda6): last unmount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:09.574308 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 23:55:09.576277 ignition[913]: INFO : Ignition 2.19.0
May 8 23:55:09.576277 ignition[913]: INFO : Stage: mount
May 8 23:55:09.576277 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:55:09.576277 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:09.580634 ignition[913]: INFO : mount: mount passed
May 8 23:55:09.580634 ignition[913]: INFO : Ignition finished successfully
May 8 23:55:09.578479 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 23:55:09.585513 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 23:55:09.981665 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 23:55:09.990674 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 23:55:09.998426 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
May 8 23:55:10.001077 kernel: BTRFS info (device vda6): first mount of filesystem 9e7e8c5a-aee3-4b23-ab26-fabdbd68734c
May 8 23:55:10.001114 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:55:10.001126 kernel: BTRFS info (device vda6): using free space tree
May 8 23:55:10.004426 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:55:10.005118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 23:55:10.022574 ignition[944]: INFO : Ignition 2.19.0
May 8 23:55:10.022574 ignition[944]: INFO : Stage: files
May 8 23:55:10.024134 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:55:10.024134 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:55:10.024134 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
May 8 23:55:10.027498 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 23:55:10.027498 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 23:55:10.030768 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 23:55:10.032102 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 23:55:10.032102 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 23:55:10.031374 unknown[944]: wrote ssh authorized keys file for user: core
May 8 23:55:10.035846 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 8 23:55:10.035846 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 8 23:55:10.870593 systemd-networkd[766]: eth0: Gained IPv6LL
May 8 23:55:11.140663 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 23:55:13.488460 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 8 23:55:13.490745 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 23:55:13.490745 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 23:55:13.823785 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 23:55:13.909783 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 23:55:13.909783 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 23:55:13.913226 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 8 23:55:14.184277 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 23:55:14.501025 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 8 23:55:14.501025 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 23:55:14.504814 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 23:55:14.524306 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:55:14.528356 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:55:14.531099 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 23:55:14.531099 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 23:55:14.531099 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 23:55:14.531099 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:55:14.531099 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:55:14.531099 ignition[944]: INFO : files: files passed
May 8 23:55:14.531099 ignition[944]: INFO : Ignition finished successfully
May 8 23:55:14.533249 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 23:55:14.554564 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 23:55:14.557776 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 23:55:14.560389 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 23:55:14.561474 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 23:55:14.565857 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 23:55:14.569280 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:14.569280 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:14.572876 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:55:14.573303 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 23:55:14.575797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 23:55:14.585570 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 23:55:14.606710 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 23:55:14.607491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 23:55:14.609099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:55:14.610910 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:55:14.612643 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:55:14.613511 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:55:14.636089 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:55:14.646592 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:55:14.656269 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:55:14.657663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:55:14.659717 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:55:14.661389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:55:14.661548 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:55:14.664016 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:55:14.666067 systemd[1]: Stopped target basic.target - Basic System. May 8 23:55:14.667663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:55:14.669372 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:55:14.671330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:55:14.673317 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:55:14.675139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:55:14.677098 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:55:14.679155 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:55:14.680912 systemd[1]: Stopped target swap.target - Swaps. May 8 23:55:14.682472 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:55:14.682609 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:55:14.685026 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:55:14.687090 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:55:14.689179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:55:14.692460 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:55:14.693781 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:55:14.693919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 23:55:14.696782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:55:14.696911 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:55:14.698946 systemd[1]: Stopped target paths.target - Path Units. May 8 23:55:14.700538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:55:14.705457 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:55:14.706727 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:55:14.708728 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:55:14.710213 systemd[1]: iscsid.socket: Deactivated successfully. 
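The burst of "Stopped target ..." / "Stopped ... hook" messages above is systemd tearing down the initrd units in reverse dependency order ahead of the root pivot. Since the initrd journal is flushed to persistent storage later in this boot, the phase can be replayed in isolation afterwards with standard journalctl filters, e.g.:

    journalctl -b -o short-precise -u initrd-cleanup.service -u dracut-pre-pivot.service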
May 8 23:55:14.710308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:55:14.711801 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 23:55:14.711887 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:55:14.713471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:55:14.713594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:55:14.719537 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:55:14.719646 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:55:14.733614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:55:14.734578 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 23:55:14.734733 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:55:14.737488 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:55:14.738316 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:55:14.738479 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:55:14.740687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:55:14.740802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:55:14.747967 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:55:14.749023 ignition[998]: INFO : Ignition 2.19.0 May 8 23:55:14.749023 ignition[998]: INFO : Stage: umount May 8 23:55:14.752425 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:55:14.752425 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:55:14.752425 ignition[998]: INFO : umount: umount passed May 8 23:55:14.752425 ignition[998]: INFO : Ignition finished successfully May 8 23:55:14.749426 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:55:14.752566 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:55:14.752687 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:55:14.754520 systemd[1]: Stopped target network.target - Network. May 8 23:55:14.757511 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:55:14.757588 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:55:14.759174 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:55:14.759223 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:55:14.760894 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:55:14.760945 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:55:14.763111 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:55:14.763159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:55:14.765053 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:55:14.770372 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 23:55:14.775858 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:55:14.780478 systemd-networkd[766]: eth0: DHCPv6 lease lost May 8 23:55:14.780542 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:55:14.780657 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 23:55:14.783430 systemd[1]: systemd-networkd.service: Deactivated successfully. 
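The ignition[998] umount stage reports "no configs at /usr/lib/ignition/base.d" and no platform directory for qemu; both are informational. Those paths hold image-baked base snippets that Ignition merges ahead of the user config. A hypothetical base snippet at /usr/lib/ignition/base.d/00-example.ign might look as follows (the accepted "version" value depends on the Ignition build):

    { "ignition": { "version": "3.3.0" } }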
May 8 23:55:14.783698 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:55:14.786712 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:55:14.786789 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:55:14.806524 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:55:14.807488 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:55:14.807564 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:55:14.809488 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:55:14.809540 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:55:14.811415 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:55:14.811468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:55:14.813450 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:55:14.813511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:55:14.815664 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:55:14.817753 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 23:55:14.819167 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 23:55:14.825698 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:55:14.825832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:55:14.828986 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:55:14.829044 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 23:55:14.830906 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 23:55:14.830944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:55:14.832604 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 23:55:14.832658 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 23:55:14.835282 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 23:55:14.835332 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 23:55:14.837824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:55:14.837873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:55:14.840506 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 23:55:14.840555 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 23:55:14.862609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 23:55:14.863661 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 23:55:14.863738 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:55:14.865900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:55:14.865947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:14.868119 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:55:14.869426 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:55:14.870884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 8 23:55:14.870973 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 23:55:14.873449 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 23:55:14.876008 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 23:55:14.886656 systemd[1]: Switching root. May 8 23:55:14.916658 systemd-journald[239]: Journal stopped May 8 23:55:15.648587 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 8 23:55:15.648641 kernel: SELinux: policy capability network_peer_controls=1 May 8 23:55:15.648653 kernel: SELinux: policy capability open_perms=1 May 8 23:55:15.648672 kernel: SELinux: policy capability extended_socket_class=1 May 8 23:55:15.648685 kernel: SELinux: policy capability always_check_network=0 May 8 23:55:15.648694 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 23:55:15.648704 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 23:55:15.648716 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 23:55:15.648731 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 23:55:15.648740 kernel: audit: type=1403 audit(1746748515.079:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 23:55:15.648751 systemd[1]: Successfully loaded SELinux policy in 37.743ms. May 8 23:55:15.648768 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.858ms. May 8 23:55:15.648780 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:55:15.648791 systemd[1]: Detected virtualization kvm. May 8 23:55:15.648802 systemd[1]: Detected architecture arm64. May 8 23:55:15.648814 systemd[1]: Detected first boot. May 8 23:55:15.648826 systemd[1]: Initializing machine ID from VM UUID. May 8 23:55:15.648837 zram_generator::config[1043]: No configuration found. May 8 23:55:15.648848 systemd[1]: Populated /etc with preset unit settings. May 8 23:55:15.648859 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 23:55:15.648869 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 23:55:15.648880 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 23:55:15.648891 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 23:55:15.648902 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 23:55:15.648915 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 23:55:15.648926 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 23:55:15.648937 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 23:55:15.648947 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 23:55:15.648958 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 23:55:15.648968 systemd[1]: Created slice user.slice - User and Session Slice. May 8 23:55:15.648979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:55:15.648989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
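"Detected first boot" and "Initializing machine ID from VM UUID" mark this as systemd's first-boot path, which units can key off via ConditionFirstBoot=. A hypothetical one-shot provisioning unit gated this way:

    [Unit]
    Description=One-time provisioning task (hypothetical example)
    ConditionFirstBoot=yes

    [Service]
    Type=oneshot
    ExecStart=/opt/bin/provision.sh

    [Install]
    WantedBy=multi-user.target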
May 8 23:55:15.649000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 23:55:15.649012 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 23:55:15.649022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 23:55:15.649033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:55:15.649045 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 23:55:15.649055 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:55:15.649065 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 23:55:15.649076 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 23:55:15.649087 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 23:55:15.649099 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 23:55:15.649110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:55:15.649120 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:55:15.649135 systemd[1]: Reached target slices.target - Slice Units. May 8 23:55:15.649146 systemd[1]: Reached target swap.target - Swaps. May 8 23:55:15.649157 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 23:55:15.649167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 23:55:15.649178 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 23:55:15.649188 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:55:15.649200 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:55:15.649211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 23:55:15.649221 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 23:55:15.649231 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 23:55:15.649242 systemd[1]: Mounting media.mount - External Media Directory... May 8 23:55:15.649252 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 23:55:15.649262 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 23:55:15.649274 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 23:55:15.649285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 23:55:15.649297 systemd[1]: Reached target machines.target - Containers. May 8 23:55:15.649307 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 23:55:15.649318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:15.649328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:55:15.649339 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 23:55:15.649349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:15.649360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
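The modprobe@configfs/dm_mod/drm/... jobs are instances of systemd's modprobe@.service template, which is approximately (trimmed from what systemd ships):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading "-" on ExecStart tolerates failure, which is why a missing module degrades to a skipped load rather than a failed boot.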
May 8 23:55:15.649370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:15.649382 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 23:55:15.649393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:15.649413 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 23:55:15.649423 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 23:55:15.649446 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 23:55:15.649457 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 23:55:15.649468 systemd[1]: Stopped systemd-fsck-usr.service. May 8 23:55:15.649478 kernel: fuse: init (API version 7.39) May 8 23:55:15.649489 kernel: loop: module loaded May 8 23:55:15.649499 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:55:15.649509 kernel: ACPI: bus type drm_connector registered May 8 23:55:15.649520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:55:15.649531 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 23:55:15.649541 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 23:55:15.649552 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:55:15.649580 systemd-journald[1114]: Collecting audit messages is disabled. May 8 23:55:15.649604 systemd[1]: verity-setup.service: Deactivated successfully. May 8 23:55:15.649615 systemd[1]: Stopped verity-setup.service. May 8 23:55:15.649625 systemd-journald[1114]: Journal started May 8 23:55:15.649646 systemd-journald[1114]: Runtime Journal (/run/log/journal/cd19ff2562244adabe87d32b6cc113b5) is 5.9M, max 47.3M, 41.4M free. May 8 23:55:15.452825 systemd[1]: Queued start job for default target multi-user.target. May 8 23:55:15.466631 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 23:55:15.467052 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 23:55:15.653938 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:55:15.654558 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 23:55:15.655702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 23:55:15.656863 systemd[1]: Mounted media.mount - External Media Directory. May 8 23:55:15.658014 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 23:55:15.659201 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 23:55:15.660463 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 23:55:15.663448 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 23:55:15.664789 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:55:15.667758 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 23:55:15.667892 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 23:55:15.669245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:15.669375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:15.670717 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 8 23:55:15.670850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:55:15.672108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:15.672247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:15.673762 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 23:55:15.673886 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 23:55:15.675195 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:15.675342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:15.676681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:55:15.678091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 23:55:15.679590 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 23:55:15.691791 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 23:55:15.700512 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 23:55:15.702600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 23:55:15.703743 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 23:55:15.703777 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:55:15.705762 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 23:55:15.707944 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 23:55:15.710119 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 23:55:15.711281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:15.712693 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 23:55:15.716843 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 23:55:15.718066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:55:15.719613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 23:55:15.720836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:55:15.722644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:55:15.724809 systemd-journald[1114]: Time spent on flushing to /var/log/journal/cd19ff2562244adabe87d32b6cc113b5 is 20.745ms for 857 entries. May 8 23:55:15.724809 systemd-journald[1114]: System Journal (/var/log/journal/cd19ff2562244adabe87d32b6cc113b5) is 8.0M, max 195.6M, 187.6M free. May 8 23:55:15.759088 systemd-journald[1114]: Received client request to flush runtime journal. May 8 23:55:15.759143 kernel: loop0: detected capacity change from 0 to 114432 May 8 23:55:15.728787 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 23:55:15.733646 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 23:55:15.736285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
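The systemd-journald lines above describe the volatile runtime journal (5.9M used, 47.3M cap) being flushed into the persistent system journal (8.0M used, 195.6M cap). Those caps are derived from filesystem size by default; they can be pinned explicitly in journald.conf, e.g. (illustrative values, not this host's configuration):

    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M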
May 8 23:55:15.738140 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 23:55:15.739592 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 23:55:15.742502 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 23:55:15.744213 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 23:55:15.749384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 23:55:15.757702 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 23:55:15.763413 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 23:55:15.760594 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 23:55:15.763127 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 23:55:15.777301 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 23:55:15.783152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:55:15.795147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 23:55:15.798912 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 23:55:15.806384 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 23:55:15.810426 kernel: loop1: detected capacity change from 0 to 114328 May 8 23:55:15.820727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:55:15.847424 kernel: loop2: detected capacity change from 0 to 201592 May 8 23:55:15.850463 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 8 23:55:15.850482 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 8 23:55:15.855138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:55:15.881473 kernel: loop3: detected capacity change from 0 to 114432 May 8 23:55:15.886474 kernel: loop4: detected capacity change from 0 to 114328 May 8 23:55:15.891426 kernel: loop5: detected capacity change from 0 to 201592 May 8 23:55:15.895361 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 23:55:15.896443 (sd-merge)[1181]: Merged extensions into '/usr'. May 8 23:55:15.899958 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 8 23:55:15.899976 systemd[1]: Reloading... May 8 23:55:15.948531 zram_generator::config[1207]: No configuration found. May 8 23:55:15.998935 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 23:55:16.053817 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:16.089781 systemd[1]: Reloading finished in 189 ms. May 8 23:55:16.120435 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 23:55:16.121874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 23:55:16.134741 systemd[1]: Starting ensure-sysext.service... May 8 23:55:16.136826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
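The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, the kubernetes image being the .raw file Ignition linked into /etc/extensions earlier. The merge state can be inspected and redone with the standard CLI:

    systemd-sysext status     # list hierarchies and merged extensions
    systemd-sysext refresh    # unmerge and re-merge after images change

Each image must carry an extension-release file (/usr/lib/extension-release.d/extension-release.NAME) whose ID and version/SYSEXT_LEVEL fields match the host, or the merge rejects it.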
May 8 23:55:16.143475 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... May 8 23:55:16.143492 systemd[1]: Reloading... May 8 23:55:16.154889 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 23:55:16.155491 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 23:55:16.156166 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 23:55:16.156365 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 23:55:16.156429 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. May 8 23:55:16.158908 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:55:16.159456 systemd-tmpfiles[1243]: Skipping /boot May 8 23:55:16.171743 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:55:16.171758 systemd-tmpfiles[1243]: Skipping /boot May 8 23:55:16.191466 zram_generator::config[1270]: No configuration found. May 8 23:55:16.278078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:16.315165 systemd[1]: Reloading finished in 171 ms. May 8 23:55:16.330587 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 23:55:16.338880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:55:16.348990 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 23:55:16.351694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 23:55:16.354718 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 23:55:16.360715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:55:16.368759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:55:16.373775 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 23:55:16.376897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:16.378076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:16.383716 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:16.392712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:16.393865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:16.394702 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 23:55:16.396803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:16.396943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:16.398509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:16.398629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:16.412258 systemd[1]: modprobe@loop.service: Deactivated successfully. 
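The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d snippets declare the same path; the first line read wins and later ones are dropped. The merged, effective configuration can be dumped to verify which snippet won:

    systemd-tmpfiles --cat-config | grep -n -e /root -e /var/log/journal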
May 8 23:55:16.412427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:16.415203 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 23:55:16.417024 systemd-udevd[1317]: Using default interface naming scheme 'v255'. May 8 23:55:16.417180 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 23:55:16.423506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:55:16.429766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:55:16.432126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:55:16.435833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:55:16.438599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:55:16.439777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:55:16.443633 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 23:55:16.446696 augenrules[1341]: No rules May 8 23:55:16.447595 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 23:55:16.449515 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 23:55:16.450698 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 23:55:16.452505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:55:16.452639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:55:16.454973 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:55:16.457388 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:55:16.458959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:55:16.460981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:55:16.461162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:55:16.465390 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:55:16.465542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:55:16.468085 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 23:55:16.474453 systemd[1]: Finished ensure-sysext.service. May 8 23:55:16.500586 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:55:16.502481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:55:16.502568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:55:16.505386 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 23:55:16.506885 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 23:55:16.509341 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
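augenrules reporting "No rules" means audit-rules.service ran against an empty /etc/audit/rules.d/. A hypothetical watch rule and reload, using the standard auditd tooling:

    # /etc/audit/rules.d/10-watch-passwd.rules (hypothetical)
    -w /etc/passwd -p wa -k passwd_changes

    augenrules --load    # compile rules.d/*.rules and load them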
May 8 23:55:16.575393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1363) May 8 23:55:16.575540 systemd-networkd[1374]: lo: Link UP May 8 23:55:16.575893 systemd-networkd[1374]: lo: Gained carrier May 8 23:55:16.576178 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 23:55:16.578353 systemd[1]: Reached target time-set.target - System Time Set. May 8 23:55:16.578861 systemd-networkd[1374]: Enumeration completed May 8 23:55:16.580108 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:55:16.584113 systemd-resolved[1311]: Positive Trust Anchors: May 8 23:55:16.584131 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:55:16.584165 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:55:16.586917 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:16.586920 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:55:16.587697 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:16.587731 systemd-networkd[1374]: eth0: Link UP May 8 23:55:16.587734 systemd-networkd[1374]: eth0: Gained carrier May 8 23:55:16.587743 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:55:16.592142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 23:55:16.600517 systemd-resolved[1311]: Defaulting to hostname 'linux'. May 8 23:55:16.602535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:55:16.604449 systemd[1]: Reached target network.target - Network. May 8 23:55:16.605351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:55:16.607506 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 23:55:16.608498 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. May 8 23:55:17.025182 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 23:55:17.025259 systemd-timesyncd[1379]: Initial clock synchronization to Thu 2025-05-08 23:55:17.025004 UTC. May 8 23:55:17.025335 systemd-resolved[1311]: Clock change detected. Flushing caches. May 8 23:55:17.032958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 23:55:17.043466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 23:55:17.050903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
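eth0 here is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which enables DHCP (hence the DHCPv4 lease 10.0.0.15/16 from 10.0.0.1 above); the "potentially unpredictable interface name" note just flags that the match is not pinned to a stable name. A site-specific file with a lexically earlier name under /etc/systemd/network takes precedence, e.g. a hypothetical static configuration:

    # /etc/systemd/network/00-eth0.network (hypothetical)
    [Match]
    Name=eth0

    [Network]
    Address=10.0.0.15/16
    Gateway=10.0.0.1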
May 8 23:55:17.062564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 23:55:17.068566 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 23:55:17.079434 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 23:55:17.098818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:55:17.104107 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:55:17.133901 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 23:55:17.135416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:55:17.136524 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:55:17.137676 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 23:55:17.138859 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 23:55:17.140280 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 23:55:17.141383 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 23:55:17.142730 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 23:55:17.143922 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 23:55:17.143965 systemd[1]: Reached target paths.target - Path Units. May 8 23:55:17.144891 systemd[1]: Reached target timers.target - Timer Units. May 8 23:55:17.146585 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 23:55:17.149076 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 23:55:17.156391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 23:55:17.159106 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 23:55:17.160843 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 23:55:17.162063 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:55:17.163049 systemd[1]: Reached target basic.target - Basic System. May 8 23:55:17.164126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 23:55:17.164157 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 23:55:17.165254 systemd[1]: Starting containerd.service - containerd container runtime... May 8 23:55:17.167460 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 23:55:17.170408 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:55:17.171425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 23:55:17.175347 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 23:55:17.176382 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 23:55:17.177285 jq[1410]: false May 8 23:55:17.178881 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
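sshd.socket and docker.socket are socket units: the daemons start on first connection rather than at boot. The earlier "/var/run/docker.sock → /run/docker.sock" warning is resolved by pointing the unit at /run directly, matching the rewrite systemd already applied at runtime:

    # docker.socket, relevant stanza after the suggested fix
    [Socket]
    ListenStream=/run/docker.sock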
May 8 23:55:17.184441 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 23:55:17.187057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 23:55:17.194063 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 23:55:17.197743 extend-filesystems[1411]: Found loop3 May 8 23:55:17.199745 extend-filesystems[1411]: Found loop4 May 8 23:55:17.199745 extend-filesystems[1411]: Found loop5 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda May 8 23:55:17.199745 extend-filesystems[1411]: Found vda1 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda2 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda3 May 8 23:55:17.199745 extend-filesystems[1411]: Found usr May 8 23:55:17.199745 extend-filesystems[1411]: Found vda4 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda6 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda7 May 8 23:55:17.199745 extend-filesystems[1411]: Found vda9 May 8 23:55:17.199745 extend-filesystems[1411]: Checking size of /dev/vda9 May 8 23:55:17.246340 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1362) May 8 23:55:17.246378 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 23:55:17.199328 dbus-daemon[1409]: [system] SELinux support is enabled May 8 23:55:17.200179 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 23:55:17.248475 extend-filesystems[1411]: Resized partition /dev/vda9 May 8 23:55:17.206627 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 23:55:17.253587 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) May 8 23:55:17.207206 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 23:55:17.210478 systemd[1]: Starting update-engine.service - Update Engine... May 8 23:55:17.214442 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 23:55:17.260896 jq[1431]: true May 8 23:55:17.222965 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 23:55:17.263432 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 23:55:17.226293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 23:55:17.230361 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 23:55:17.230545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 23:55:17.283905 jq[1436]: true May 8 23:55:17.230820 systemd[1]: motdgen.service: Deactivated successfully. May 8 23:55:17.285568 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 23:55:17.285568 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 23:55:17.285568 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 23:55:17.230964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
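The EXT4 and resize2fs messages record an online grow of the root filesystem on /dev/vda9 from 553472 to 1864699 4-KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB (1864699 × 4096 B ≈ 7.6 GB). With the partition already enlarged, the filesystem step is a single call:

    resize2fs /dev/vda9    # grow a mounted ext4 fs to fill its partition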
May 8 23:55:17.294663 extend-filesystems[1411]: Resized filesystem in /dev/vda9 May 8 23:55:17.303398 update_engine[1426]: I20250508 23:55:17.285558 1426 main.cc:92] Flatcar Update Engine starting May 8 23:55:17.303398 update_engine[1426]: I20250508 23:55:17.297289 1426 update_check_scheduler.cc:74] Next update check in 3m41s May 8 23:55:17.234692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 23:55:17.313660 tar[1435]: linux-arm64/LICENSE May 8 23:55:17.313660 tar[1435]: linux-arm64/helm May 8 23:55:17.234850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 23:55:17.249423 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 23:55:17.257131 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 23:55:17.257156 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 23:55:17.258967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 23:55:17.258987 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 23:55:17.285123 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 23:55:17.287332 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 23:55:17.288270 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) May 8 23:55:17.290158 systemd-logind[1422]: New seat seat0. May 8 23:55:17.296410 systemd[1]: Started systemd-logind.service - User Login Management. May 8 23:55:17.299492 systemd[1]: Started update-engine.service - Update Engine. May 8 23:55:17.313096 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 23:55:17.325895 bash[1465]: Updated "/home/core/.ssh/authorized_keys" May 8 23:55:17.327824 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 23:55:17.329824 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 23:55:17.358428 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 23:55:17.456010 containerd[1439]: time="2025-05-08T23:55:17.455743038Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 23:55:17.459405 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 23:55:17.482299 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 23:55:17.483052 containerd[1439]: time="2025-05-08T23:55:17.483006598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.484684 containerd[1439]: time="2025-05-08T23:55:17.484634278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:17.484835 containerd[1439]: time="2025-05-08T23:55:17.484817438Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 23:55:17.484920 containerd[1439]: time="2025-05-08T23:55:17.484903518Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 23:55:17.485202 containerd[1439]: time="2025-05-08T23:55:17.485179918Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:55:17.485370 containerd[1439]: time="2025-05-08T23:55:17.485350318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.485571 containerd[1439]: time="2025-05-08T23:55:17.485485198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:17.485637 containerd[1439]: time="2025-05-08T23:55:17.485624078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.486024 containerd[1439]: time="2025-05-08T23:55:17.485950038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:17.486103 containerd[1439]: time="2025-05-08T23:55:17.486087198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.486158 containerd[1439]: time="2025-05-08T23:55:17.486144278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:17.486696 containerd[1439]: time="2025-05-08T23:55:17.486292118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.486696 containerd[1439]: time="2025-05-08T23:55:17.486400678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.486696 containerd[1439]: time="2025-05-08T23:55:17.486655038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:55:17.487088 containerd[1439]: time="2025-05-08T23:55:17.487062838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:55:17.487220 containerd[1439]: time="2025-05-08T23:55:17.487203398Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:55:17.487392 containerd[1439]: time="2025-05-08T23:55:17.487371358Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 23:55:17.487590 containerd[1439]: time="2025-05-08T23:55:17.487569278Z" level=info msg="metadata content store policy set" policy=shared May 8 23:55:17.489633 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 23:55:17.493993 containerd[1439]: time="2025-05-08T23:55:17.493753558Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:55:17.493993 containerd[1439]: time="2025-05-08T23:55:17.493817518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 23:55:17.493993 containerd[1439]: time="2025-05-08T23:55:17.493833598Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:55:17.493993 containerd[1439]: time="2025-05-08T23:55:17.493850238Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 23:55:17.493993 containerd[1439]: time="2025-05-08T23:55:17.493864758Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:55:17.494147 containerd[1439]: time="2025-05-08T23:55:17.494025678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:55:17.494351 containerd[1439]: time="2025-05-08T23:55:17.494317078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 23:55:17.494511 containerd[1439]: time="2025-05-08T23:55:17.494476118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:55:17.494546 containerd[1439]: time="2025-05-08T23:55:17.494512638Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:55:17.494546 containerd[1439]: time="2025-05-08T23:55:17.494528958Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:55:17.494581 containerd[1439]: time="2025-05-08T23:55:17.494550998Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494581 containerd[1439]: time="2025-05-08T23:55:17.494568398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494623 containerd[1439]: time="2025-05-08T23:55:17.494581798Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494623 containerd[1439]: time="2025-05-08T23:55:17.494596558Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494623 containerd[1439]: time="2025-05-08T23:55:17.494611798Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494674 containerd[1439]: time="2025-05-08T23:55:17.494625558Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494674 containerd[1439]: time="2025-05-08T23:55:17.494639198Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 23:55:17.494674 containerd[1439]: time="2025-05-08T23:55:17.494652398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 23:55:17.494674 containerd[1439]: time="2025-05-08T23:55:17.494672478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494741 containerd[1439]: time="2025-05-08T23:55:17.494690718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494741 containerd[1439]: time="2025-05-08T23:55:17.494706878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494741 containerd[1439]: time="2025-05-08T23:55:17.494720398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494741 containerd[1439]: time="2025-05-08T23:55:17.494733958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494747958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494760078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494772398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494784638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494800278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494812078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494824998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494842 containerd[1439]: time="2025-05-08T23:55:17.494836718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494973 containerd[1439]: time="2025-05-08T23:55:17.494852158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:55:17.494973 containerd[1439]: time="2025-05-08T23:55:17.494876998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494973 containerd[1439]: time="2025-05-08T23:55:17.494889158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 23:55:17.494973 containerd[1439]: time="2025-05-08T23:55:17.494899478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:55:17.495046 containerd[1439]: time="2025-05-08T23:55:17.495015638Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 23:55:17.495046 containerd[1439]: time="2025-05-08T23:55:17.495034358Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:55:17.495086 containerd[1439]: time="2025-05-08T23:55:17.495045318Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 23:55:17.495086 containerd[1439]: time="2025-05-08T23:55:17.495056758Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:55:17.495086 containerd[1439]: time="2025-05-08T23:55:17.495066758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:55:17.495086 containerd[1439]: time="2025-05-08T23:55:17.495078478Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:55:17.495154 containerd[1439]: time="2025-05-08T23:55:17.495088038Z" level=info msg="NRI interface is disabled by configuration." May 8 23:55:17.495154 containerd[1439]: time="2025-05-08T23:55:17.495102198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 23:55:17.495558 containerd[1439]: time="2025-05-08T23:55:17.495470678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:55:17.495558 containerd[1439]: time="2025-05-08T23:55:17.495546718Z" level=info msg="Connect containerd service" May 8 23:55:17.495699 containerd[1439]: time="2025-05-08T23:55:17.495577878Z" level=info msg="using legacy CRI server" May 8 23:55:17.495699 containerd[1439]: time="2025-05-08T23:55:17.495586198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:55:17.495699 containerd[1439]: time="2025-05-08T23:55:17.495674598Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:55:17.495794 systemd[1]: issuegen.service: Deactivated successfully. May 8 23:55:17.496378 containerd[1439]: time="2025-05-08T23:55:17.496348438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:55:17.496697 containerd[1439]: time="2025-05-08T23:55:17.496601798Z" level=info msg="Start subscribing containerd event" May 8 23:55:17.496855 containerd[1439]: time="2025-05-08T23:55:17.496753758Z" level=info msg="Start recovering state" May 8 23:55:17.496855 containerd[1439]: time="2025-05-08T23:55:17.496837518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:55:17.496900 containerd[1439]: time="2025-05-08T23:55:17.496878238Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:55:17.496986 containerd[1439]: time="2025-05-08T23:55:17.496970438Z" level=info msg="Start event monitor" May 8 23:55:17.497043 containerd[1439]: time="2025-05-08T23:55:17.497031838Z" level=info msg="Start snapshots syncer" May 8 23:55:17.497127 containerd[1439]: time="2025-05-08T23:55:17.497114878Z" level=info msg="Start cni network conf syncer for default" May 8 23:55:17.497304 containerd[1439]: time="2025-05-08T23:55:17.497168878Z" level=info msg="Start streaming server" May 8 23:55:17.497324 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 23:55:17.497450 containerd[1439]: time="2025-05-08T23:55:17.497435478Z" level=info msg="containerd successfully booted in 0.044148s" May 8 23:55:17.498738 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:55:17.505522 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 23:55:17.517913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 23:55:17.526691 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:55:17.529278 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 23:55:17.530827 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:55:17.681865 tar[1435]: linux-arm64/README.md May 8 23:55:17.700306 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 23:55:18.582522 systemd-networkd[1374]: eth0: Gained IPv6LL May 8 23:55:18.585024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 23:55:18.586804 systemd[1]: Reached target network-online.target - Network is Online. May 8 23:55:18.599477 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
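The containerd entries above show the daemon booting in ~44 ms and serving on /run/containerd/containerd.sock (plus a ttrpc variant of the same socket). As a minimal client-side sketch — assuming the containerd Go client (github.com/containerd/containerd) is available, not something this log itself demonstrates — one can dial that socket and query the daemon version:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Dial the same socket the daemon reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Version is a non-namespaced call, so a plain context suffices.
	ver, err := client.Version(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", ver.Version, "revision", ver.Revision)
}
```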
May 8 23:55:18.601504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:18.603488 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 23:55:18.618113 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 23:55:18.618582 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 23:55:18.620687 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 23:55:18.625087 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 23:55:19.122476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:19.123949 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 23:55:19.126173 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:55:19.129340 systemd[1]: Startup finished in 664ms (kernel) + 8.339s (initrd) + 3.670s (userspace) = 12.674s. May 8 23:55:19.538371 kubelet[1522]: E0508 23:55:19.538254 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:55:19.540571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:55:19.540716 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:55:20.285861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 23:55:20.286952 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:52780.service - OpenSSH per-connection server daemon (10.0.0.1:52780). May 8 23:55:20.339328 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 52780 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:20.341318 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:20.349058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 23:55:20.360533 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 23:55:20.362318 systemd-logind[1422]: New session 1 of user core. May 8 23:55:20.371304 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 23:55:20.373678 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 23:55:20.380641 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 23:55:20.453264 systemd[1539]: Queued start job for default target default.target. May 8 23:55:20.466172 systemd[1539]: Created slice app.slice - User Application Slice. May 8 23:55:20.466202 systemd[1539]: Reached target paths.target - Paths. May 8 23:55:20.466214 systemd[1539]: Reached target timers.target - Timers. May 8 23:55:20.467483 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 23:55:20.477759 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 23:55:20.477829 systemd[1539]: Reached target sockets.target - Sockets. May 8 23:55:20.477842 systemd[1539]: Reached target basic.target - Basic System. May 8 23:55:20.477880 systemd[1539]: Reached target default.target - Main User Target. May 8 23:55:20.477911 systemd[1539]: Startup finished in 91ms. 
May 8 23:55:20.478217 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 23:55:20.479621 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 23:55:20.548026 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:52792.service - OpenSSH per-connection server daemon (10.0.0.1:52792). May 8 23:55:20.584615 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 52792 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:20.585932 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:20.590817 systemd-logind[1422]: New session 2 of user core. May 8 23:55:20.605456 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 23:55:20.659797 sshd[1550]: pam_unix(sshd:session): session closed for user core May 8 23:55:20.680973 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:52792.service: Deactivated successfully. May 8 23:55:20.685342 systemd[1]: session-2.scope: Deactivated successfully. May 8 23:55:20.686675 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. May 8 23:55:20.687811 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:52804.service - OpenSSH per-connection server daemon (10.0.0.1:52804). May 8 23:55:20.689498 systemd-logind[1422]: Removed session 2. May 8 23:55:20.725161 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 52804 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:20.726598 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:20.730724 systemd-logind[1422]: New session 3 of user core. May 8 23:55:20.741423 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 23:55:20.789516 sshd[1557]: pam_unix(sshd:session): session closed for user core May 8 23:55:20.799690 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:52804.service: Deactivated successfully. May 8 23:55:20.801946 systemd[1]: session-3.scope: Deactivated successfully. May 8 23:55:20.803140 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. May 8 23:55:20.804343 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:52814.service - OpenSSH per-connection server daemon (10.0.0.1:52814). May 8 23:55:20.805171 systemd-logind[1422]: Removed session 3. May 8 23:55:20.840552 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 52814 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:20.841895 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:20.846087 systemd-logind[1422]: New session 4 of user core. May 8 23:55:20.855427 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 23:55:20.910283 sshd[1564]: pam_unix(sshd:session): session closed for user core May 8 23:55:20.929055 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:52814.service: Deactivated successfully. May 8 23:55:20.932092 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:55:20.933981 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. May 8 23:55:20.943681 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:52816.service - OpenSSH per-connection server daemon (10.0.0.1:52816). May 8 23:55:20.944261 systemd-logind[1422]: Removed session 4. 
May 8 23:55:20.978220 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 52816 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:20.979922 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:20.984294 systemd-logind[1422]: New session 5 of user core. May 8 23:55:20.997460 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 23:55:21.065090 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:55:21.065405 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:21.080157 sudo[1574]: pam_unix(sudo:session): session closed for user root May 8 23:55:21.083311 sshd[1571]: pam_unix(sshd:session): session closed for user core May 8 23:55:21.094299 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:52816.service: Deactivated successfully. May 8 23:55:21.096205 systemd[1]: session-5.scope: Deactivated successfully. May 8 23:55:21.097943 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. May 8 23:55:21.108550 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:52830.service - OpenSSH per-connection server daemon (10.0.0.1:52830). May 8 23:55:21.109396 systemd-logind[1422]: Removed session 5. May 8 23:55:21.142143 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 52830 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:21.143994 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:21.148217 systemd-logind[1422]: New session 6 of user core. May 8 23:55:21.158393 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:55:21.210373 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:55:21.210659 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:21.214203 sudo[1583]: pam_unix(sudo:session): session closed for user root May 8 23:55:21.219272 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 23:55:21.219566 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:21.247514 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 23:55:21.248882 auditctl[1586]: No rules May 8 23:55:21.249875 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:55:21.250089 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 23:55:21.253836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 23:55:21.278146 augenrules[1604]: No rules May 8 23:55:21.279480 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 23:55:21.280671 sudo[1582]: pam_unix(sudo:session): session closed for user root May 8 23:55:21.282291 sshd[1579]: pam_unix(sshd:session): session closed for user core May 8 23:55:21.288646 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:52830.service: Deactivated successfully. May 8 23:55:21.289990 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:55:21.291149 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. May 8 23:55:21.292288 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:52846.service - OpenSSH per-connection server daemon (10.0.0.1:52846). May 8 23:55:21.293030 systemd-logind[1422]: Removed session 6. 
May 8 23:55:21.331584 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 52846 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:55:21.332125 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:21.336295 systemd-logind[1422]: New session 7 of user core. May 8 23:55:21.350405 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 23:55:21.400989 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:55:21.401314 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:55:21.743506 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 23:55:21.743679 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 23:55:22.008425 dockerd[1634]: time="2025-05-08T23:55:22.007185438Z" level=info msg="Starting up" May 8 23:55:22.254181 dockerd[1634]: time="2025-05-08T23:55:22.254113318Z" level=info msg="Loading containers: start." May 8 23:55:22.340370 kernel: Initializing XFRM netlink socket May 8 23:55:22.407439 systemd-networkd[1374]: docker0: Link UP May 8 23:55:22.427684 dockerd[1634]: time="2025-05-08T23:55:22.427622278Z" level=info msg="Loading containers: done." May 8 23:55:22.438725 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3537036265-merged.mount: Deactivated successfully. May 8 23:55:22.440134 dockerd[1634]: time="2025-05-08T23:55:22.440084158Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 23:55:22.440229 dockerd[1634]: time="2025-05-08T23:55:22.440204438Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 23:55:22.440366 dockerd[1634]: time="2025-05-08T23:55:22.440340758Z" level=info msg="Daemon has completed initialization" May 8 23:55:22.468133 dockerd[1634]: time="2025-05-08T23:55:22.468002958Z" level=info msg="API listen on /run/docker.sock" May 8 23:55:22.468292 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 23:55:23.240426 containerd[1439]: time="2025-05-08T23:55:23.240344518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 23:55:23.859967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567979703.mount: Deactivated successfully. 
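At this point dockerd has completed initialization and reports "API listen on /run/docker.sock". A small sketch of exercising that API — assuming the Docker Engine Go SDK (github.com/docker/docker/client); the program below is illustrative, not part of the boot sequence — pings the daemon over the default socket:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to the default unix socket (the same
	// /run/docker.sock the daemon announced) when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("daemon API version:", ping.APIVersion)
}
```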
May 8 23:55:25.119859 containerd[1439]: time="2025-05-08T23:55:25.119815318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:25.120768 containerd[1439]: time="2025-05-08T23:55:25.120717918Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 8 23:55:25.121606 containerd[1439]: time="2025-05-08T23:55:25.121575438Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:25.124654 containerd[1439]: time="2025-05-08T23:55:25.124590398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:25.126548 containerd[1439]: time="2025-05-08T23:55:25.126181798Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.88579648s" May 8 23:55:25.126548 containerd[1439]: time="2025-05-08T23:55:25.126229398Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 8 23:55:25.127296 containerd[1439]: time="2025-05-08T23:55:25.127225238Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 23:55:26.638806 containerd[1439]: time="2025-05-08T23:55:26.638759038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:26.639778 containerd[1439]: time="2025-05-08T23:55:26.639566278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 8 23:55:26.640480 containerd[1439]: time="2025-05-08T23:55:26.640442438Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:26.643332 containerd[1439]: time="2025-05-08T23:55:26.643296518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:26.644540 containerd[1439]: time="2025-05-08T23:55:26.644504198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.51720436s" May 8 23:55:26.644570 containerd[1439]: time="2025-05-08T23:55:26.644537718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 8 23:55:26.645097 containerd[1439]: 
time="2025-05-08T23:55:26.645069638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 23:55:28.266975 containerd[1439]: time="2025-05-08T23:55:28.266915758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:28.320166 containerd[1439]: time="2025-05-08T23:55:28.320100278Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 8 23:55:28.334426 containerd[1439]: time="2025-05-08T23:55:28.334374718Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:28.348878 containerd[1439]: time="2025-05-08T23:55:28.348808838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:28.350180 containerd[1439]: time="2025-05-08T23:55:28.350047238Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.70494696s" May 8 23:55:28.350180 containerd[1439]: time="2025-05-08T23:55:28.350085438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 8 23:55:28.351046 containerd[1439]: time="2025-05-08T23:55:28.351014158Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 23:55:29.447675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798645729.mount: Deactivated successfully. May 8 23:55:29.791057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 23:55:29.805534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:29.906142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 23:55:29.910322 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:55:29.922796 containerd[1439]: time="2025-05-08T23:55:29.922639158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:29.923961 containerd[1439]: time="2025-05-08T23:55:29.923503558Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 8 23:55:29.925050 containerd[1439]: time="2025-05-08T23:55:29.924991438Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:29.926840 containerd[1439]: time="2025-05-08T23:55:29.926807718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:29.927908 containerd[1439]: time="2025-05-08T23:55:29.927876958Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.57682868s" May 8 23:55:29.927968 containerd[1439]: time="2025-05-08T23:55:29.927921838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 8 23:55:29.928547 containerd[1439]: time="2025-05-08T23:55:29.928519438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 23:55:29.944665 kubelet[1863]: E0508 23:55:29.944609 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:55:29.947700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:55:29.947856 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:55:30.518661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389742340.mount: Deactivated successfully. 
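Both kubelet starts so far (PIDs 1522 and 1863) exit identically: /var/lib/kubelet/config.yaml does not exist yet, because nothing (typically kubeadm) has generated it. Purely as an illustrative sketch — assuming the k8s.io/kubelet staging module and sigs.k8s.io/yaml are on the module path — the missing file is a serialized KubeletConfiguration; the two fields set below match what later entries in this log report (systemd cgroup driver, /etc/kubernetes/manifests as the static-pod path):

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Skeleton of the file the kubelet keeps failing to find; on a
	// kubeadm-managed node, `kubeadm init`/`join` writes the real one.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver:  "systemd",
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```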
May 8 23:55:31.320488 containerd[1439]: time="2025-05-08T23:55:31.320424758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.321042 containerd[1439]: time="2025-05-08T23:55:31.321012038Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 8 23:55:31.321790 containerd[1439]: time="2025-05-08T23:55:31.321761398Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.324827 containerd[1439]: time="2025-05-08T23:55:31.324789118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.327745 containerd[1439]: time="2025-05-08T23:55:31.327141958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.39858988s" May 8 23:55:31.327745 containerd[1439]: time="2025-05-08T23:55:31.327186038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 8 23:55:31.328264 containerd[1439]: time="2025-05-08T23:55:31.328132758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 23:55:31.779331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465261588.mount: Deactivated successfully. 
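The PullImage/ImageCreate pairs above are the CRI plugin driving containerd's pull path for the control-plane images. A rough client-side equivalent — again assuming the containerd Go client — pulls into the "k8s.io" namespace, which is where the CRI plugin keeps Kubernetes images, so the result would be visible to it:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Namespaced operations need an explicit namespace on the context.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image the log shows being pulled, unpacked into overlayfs.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```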
May 8 23:55:31.782839 containerd[1439]: time="2025-05-08T23:55:31.782659918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.783472 containerd[1439]: time="2025-05-08T23:55:31.783396158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 8 23:55:31.784261 containerd[1439]: time="2025-05-08T23:55:31.783995278Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.787325 containerd[1439]: time="2025-05-08T23:55:31.786703838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:31.787417 containerd[1439]: time="2025-05-08T23:55:31.787325278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.0926ms" May 8 23:55:31.787417 containerd[1439]: time="2025-05-08T23:55:31.787359638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 23:55:31.787972 containerd[1439]: time="2025-05-08T23:55:31.787799198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 23:55:32.318398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340175546.mount: Deactivated successfully. May 8 23:55:34.923852 containerd[1439]: time="2025-05-08T23:55:34.923803758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:34.924952 containerd[1439]: time="2025-05-08T23:55:34.924889598Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 8 23:55:34.925595 containerd[1439]: time="2025-05-08T23:55:34.925563198Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:34.929062 containerd[1439]: time="2025-05-08T23:55:34.929000238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:55:34.930577 containerd[1439]: time="2025-05-08T23:55:34.930423358Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.1425882s" May 8 23:55:34.930577 containerd[1439]: time="2025-05-08T23:55:34.930474038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 8 23:55:40.198181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 8 23:55:40.207858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:40.308778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:40.312446 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:55:40.345895 kubelet[2011]: E0508 23:55:40.345824 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:55:40.348430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:55:40.348573 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:55:41.418950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:41.435514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:41.456936 systemd[1]: Reloading requested from client PID 2026 ('systemctl') (unit session-7.scope)... May 8 23:55:41.456953 systemd[1]: Reloading... May 8 23:55:41.530295 zram_generator::config[2065]: No configuration found. May 8 23:55:41.700339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:41.756038 systemd[1]: Reloading finished in 298 ms. May 8 23:55:41.800986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:41.804676 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:55:41.804921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:41.806556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:41.911154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:41.915440 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:55:41.951420 kubelet[2112]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:55:41.951420 kubelet[2112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 23:55:41.951420 kubelet[2112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 23:55:41.951420 kubelet[2112]: I0508 23:55:41.951385 2112 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:55:43.240551 kubelet[2112]: I0508 23:55:43.240507 2112 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 23:55:43.240551 kubelet[2112]: I0508 23:55:43.240539 2112 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:55:43.242590 kubelet[2112]: I0508 23:55:43.242565 2112 server.go:954] "Client rotation is on, will bootstrap in background" May 8 23:55:43.279726 kubelet[2112]: E0508 23:55:43.279675 2112 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:43.282097 kubelet[2112]: I0508 23:55:43.282067 2112 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:55:43.289245 kubelet[2112]: E0508 23:55:43.289187 2112 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 23:55:43.289245 kubelet[2112]: I0508 23:55:43.289223 2112 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 23:55:43.294739 kubelet[2112]: I0508 23:55:43.294708 2112 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:55:43.296851 kubelet[2112]: I0508 23:55:43.296793 2112 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:55:43.297034 kubelet[2112]: I0508 23:55:43.296849 2112 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 23:55:43.297112 kubelet[2112]: I0508 23:55:43.297097 2112 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:55:43.297112 kubelet[2112]: I0508 23:55:43.297106 2112 container_manager_linux.go:304] "Creating device plugin manager" May 8 23:55:43.297344 kubelet[2112]: I0508 23:55:43.297322 2112 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:43.299902 kubelet[2112]: I0508 23:55:43.299875 2112 kubelet.go:446] "Attempting to sync node with API server" May 8 23:55:43.299949 kubelet[2112]: I0508 23:55:43.299904 2112 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:55:43.299949 kubelet[2112]: I0508 23:55:43.299926 2112 kubelet.go:352] "Adding apiserver pod source" May 8 23:55:43.299949 kubelet[2112]: I0508 23:55:43.299936 2112 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:55:43.303752 kubelet[2112]: W0508 23:55:43.303655 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:43.303752 kubelet[2112]: E0508 23:55:43.303718 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:43.303752 kubelet[2112]: I0508 23:55:43.303686 2112 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 23:55:43.303974 kubelet[2112]: W0508 23:55:43.303930 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:43.304038 kubelet[2112]: E0508 23:55:43.303979 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:43.304355 kubelet[2112]: I0508 23:55:43.304340 2112 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:55:43.304512 kubelet[2112]: W0508 23:55:43.304468 2112 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 23:55:43.305431 kubelet[2112]: I0508 23:55:43.305413 2112 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 23:55:43.305480 kubelet[2112]: I0508 23:55:43.305449 2112 server.go:1287] "Started kubelet" May 8 23:55:43.306411 kubelet[2112]: I0508 23:55:43.305536 2112 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:55:43.307905 kubelet[2112]: I0508 23:55:43.307101 2112 server.go:490] "Adding debug handlers to kubelet server" May 8 23:55:43.309277 kubelet[2112]: I0508 23:55:43.309154 2112 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.309586 2112 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.309712 2112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.309986 2112 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 23:55:43.311641 kubelet[2112]: E0508 23:55:43.310165 2112 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.310189 2112 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.310365 2112 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.310427 2112 reconciler.go:26] "Reconciler: start to sync state" May 8 23:55:43.311641 kubelet[2112]: W0508 23:55:43.310759 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:43.311641 kubelet[2112]: E0508 23:55:43.310798 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" 
logger="UnhandledError" May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.311051 2112 factory.go:221] Registration of the systemd container factory successfully May 8 23:55:43.311641 kubelet[2112]: I0508 23:55:43.311121 2112 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:55:43.312452 kubelet[2112]: E0508 23:55:43.312082 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" May 8 23:55:43.312452 kubelet[2112]: I0508 23:55:43.312114 2112 factory.go:221] Registration of the containerd container factory successfully May 8 23:55:43.313154 kubelet[2112]: E0508 23:55:43.313123 2112 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:55:43.313704 kubelet[2112]: E0508 23:55:43.313254 2112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db28ba3a6ad86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 23:55:43.305428358 +0000 UTC m=+1.386794721,LastTimestamp:2025-05-08 23:55:43.305428358 +0000 UTC m=+1.386794721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 23:55:43.323658 kubelet[2112]: I0508 23:55:43.323630 2112 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 23:55:43.323658 kubelet[2112]: I0508 23:55:43.323667 2112 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 23:55:43.323773 kubelet[2112]: I0508 23:55:43.323687 2112 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:43.400687 kubelet[2112]: I0508 23:55:43.400648 2112 policy_none.go:49] "None policy: Start" May 8 23:55:43.400687 kubelet[2112]: I0508 23:55:43.400680 2112 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 23:55:43.400687 kubelet[2112]: I0508 23:55:43.400693 2112 state_mem.go:35] "Initializing new in-memory state store" May 8 23:55:43.405320 kubelet[2112]: I0508 23:55:43.405286 2112 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:55:43.406452 kubelet[2112]: I0508 23:55:43.406422 2112 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:55:43.406452 kubelet[2112]: I0508 23:55:43.406448 2112 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 23:55:43.406540 kubelet[2112]: I0508 23:55:43.406471 2112 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 23:55:43.406540 kubelet[2112]: I0508 23:55:43.406478 2112 kubelet.go:2388] "Starting kubelet main sync loop" May 8 23:55:43.406540 kubelet[2112]: E0508 23:55:43.406521 2112 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:55:43.407524 kubelet[2112]: W0508 23:55:43.407443 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:43.407524 kubelet[2112]: E0508 23:55:43.407497 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:43.408473 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 23:55:43.410610 kubelet[2112]: E0508 23:55:43.410580 2112 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:55:43.425704 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 23:55:43.428540 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 23:55:43.441170 kubelet[2112]: I0508 23:55:43.441122 2112 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:55:43.441688 kubelet[2112]: I0508 23:55:43.441360 2112 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 23:55:43.441688 kubelet[2112]: I0508 23:55:43.441381 2112 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:55:43.441770 kubelet[2112]: I0508 23:55:43.441750 2112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:55:43.443042 kubelet[2112]: E0508 23:55:43.443018 2112 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 23:55:43.443199 kubelet[2112]: E0508 23:55:43.443181 2112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 23:55:43.511176 kubelet[2112]: I0508 23:55:43.511062 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:43.511545 kubelet[2112]: I0508 23:55:43.511335 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:43.511545 kubelet[2112]: I0508 23:55:43.511367 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:43.511545 kubelet[2112]: I0508 23:55:43.511385 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:43.511545 kubelet[2112]: I0508 23:55:43.511412 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:43.511545 kubelet[2112]: I0508 23:55:43.511430 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:43.511723 kubelet[2112]: I0508 23:55:43.511446 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 23:55:43.511723 kubelet[2112]: I0508 23:55:43.511461 2112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:43.511723 kubelet[2112]: I0508 23:55:43.511476 2112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:43.513177 kubelet[2112]: E0508 23:55:43.512916 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" May 8 23:55:43.517394 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 23:55:43.541997 kubelet[2112]: E0508 23:55:43.541951 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:43.542882 kubelet[2112]: I0508 23:55:43.542673 2112 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:43.543085 kubelet[2112]: E0508 23:55:43.543041 2112 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 8 23:55:43.544738 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 8 23:55:43.561550 kubelet[2112]: E0508 23:55:43.561426 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:43.563928 systemd[1]: Created slice kubepods-burstable-pod0ccf06f2069db378087ce8812b69922c.slice - libcontainer container kubepods-burstable-pod0ccf06f2069db378087ce8812b69922c.slice. 
May 8 23:55:43.565694 kubelet[2112]: E0508 23:55:43.565660 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:43.744389 kubelet[2112]: I0508 23:55:43.744356 2112 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:43.744698 kubelet[2112]: E0508 23:55:43.744675 2112 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 8 23:55:43.842790 kubelet[2112]: E0508 23:55:43.842699 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:43.843394 containerd[1439]: time="2025-05-08T23:55:43.843335718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 23:55:43.863229 kubelet[2112]: E0508 23:55:43.862660 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:43.863412 containerd[1439]: time="2025-05-08T23:55:43.863357758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 23:55:43.866963 kubelet[2112]: E0508 23:55:43.866865 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:43.867275 containerd[1439]: time="2025-05-08T23:55:43.867244998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ccf06f2069db378087ce8812b69922c,Namespace:kube-system,Attempt:0,}" May 8 23:55:43.914445 kubelet[2112]: E0508 23:55:43.914396 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" May 8 23:55:44.107762 kubelet[2112]: W0508 23:55:44.107583 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:44.107762 kubelet[2112]: E0508 23:55:44.107651 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:44.146351 kubelet[2112]: I0508 23:55:44.146321 2112 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:44.146689 kubelet[2112]: E0508 23:55:44.146658 2112 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 8 23:55:44.398515 kubelet[2112]: W0508 23:55:44.398365 2112 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:44.398515 kubelet[2112]: E0508 23:55:44.398439 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:44.453208 kubelet[2112]: W0508 23:55:44.453150 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:44.453440 kubelet[2112]: E0508 23:55:44.453414 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:44.453782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433975856.mount: Deactivated successfully. May 8 23:55:44.457127 containerd[1439]: time="2025-05-08T23:55:44.457084278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:55:44.458274 containerd[1439]: time="2025-05-08T23:55:44.458247918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 23:55:44.460055 containerd[1439]: time="2025-05-08T23:55:44.460016918Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:55:44.461460 containerd[1439]: time="2025-05-08T23:55:44.461424758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:55:44.461644 containerd[1439]: time="2025-05-08T23:55:44.461608278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:55:44.462476 containerd[1439]: time="2025-05-08T23:55:44.462444078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:55:44.463353 containerd[1439]: time="2025-05-08T23:55:44.463244518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:55:44.464643 containerd[1439]: time="2025-05-08T23:55:44.464593158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:55:44.466512 containerd[1439]: time="2025-05-08T23:55:44.466466598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.02432ms" May 8 23:55:44.468327 containerd[1439]: time="2025-05-08T23:55:44.468290358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.8616ms" May 8 23:55:44.470327 containerd[1439]: time="2025-05-08T23:55:44.470289598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 602.97904ms" May 8 23:55:44.643869 containerd[1439]: time="2025-05-08T23:55:44.643769838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:44.643869 containerd[1439]: time="2025-05-08T23:55:44.643830678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:44.643869 containerd[1439]: time="2025-05-08T23:55:44.643847278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.644048 containerd[1439]: time="2025-05-08T23:55:44.643924158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.644704 containerd[1439]: time="2025-05-08T23:55:44.644559038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:44.644704 containerd[1439]: time="2025-05-08T23:55:44.644655238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:44.644704 containerd[1439]: time="2025-05-08T23:55:44.644670238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.645007 containerd[1439]: time="2025-05-08T23:55:44.644772358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.647366 containerd[1439]: time="2025-05-08T23:55:44.647207918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:44.647366 containerd[1439]: time="2025-05-08T23:55:44.647313158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:44.647366 containerd[1439]: time="2025-05-08T23:55:44.647324918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.647650 containerd[1439]: time="2025-05-08T23:55:44.647426598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:44.671504 systemd[1]: Started cri-containerd-2a8d2eddee12d79491d79698d656f8708b616c9b2fcad5787102b52378cdd73a.scope - libcontainer container 2a8d2eddee12d79491d79698d656f8708b616c9b2fcad5787102b52378cdd73a. May 8 23:55:44.672879 systemd[1]: Started cri-containerd-3ef88b69da3385c0fa2595ea8c762e7686a3d78911f41eba5eb87939545a7b7d.scope - libcontainer container 3ef88b69da3385c0fa2595ea8c762e7686a3d78911f41eba5eb87939545a7b7d. May 8 23:55:44.674639 systemd[1]: Started cri-containerd-8211da1b2a3256396ba185bbf68f37fba7a7cfcebd891961a6ce15fc8ea56177.scope - libcontainer container 8211da1b2a3256396ba185bbf68f37fba7a7cfcebd891961a6ce15fc8ea56177. May 8 23:55:44.706063 kubelet[2112]: W0508 23:55:44.706015 2112 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused May 8 23:55:44.706063 kubelet[2112]: E0508 23:55:44.706056 2112 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" May 8 23:55:44.708194 containerd[1439]: time="2025-05-08T23:55:44.708127398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ccf06f2069db378087ce8812b69922c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a8d2eddee12d79491d79698d656f8708b616c9b2fcad5787102b52378cdd73a\"" May 8 23:55:44.709616 kubelet[2112]: E0508 23:55:44.709594 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:44.711886 containerd[1439]: time="2025-05-08T23:55:44.711775038Z" level=info msg="CreateContainer within sandbox \"2a8d2eddee12d79491d79698d656f8708b616c9b2fcad5787102b52378cdd73a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 23:55:44.714736 containerd[1439]: time="2025-05-08T23:55:44.714707158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ef88b69da3385c0fa2595ea8c762e7686a3d78911f41eba5eb87939545a7b7d\"" May 8 23:55:44.714894 kubelet[2112]: E0508 23:55:44.714763 2112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" May 8 23:55:44.715165 containerd[1439]: time="2025-05-08T23:55:44.715095838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"8211da1b2a3256396ba185bbf68f37fba7a7cfcebd891961a6ce15fc8ea56177\"" May 8 23:55:44.715951 kubelet[2112]: E0508 23:55:44.715931 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:44.716025 kubelet[2112]: E0508 23:55:44.716009 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:44.717525 containerd[1439]: time="2025-05-08T23:55:44.717424998Z" level=info msg="CreateContainer within sandbox \"3ef88b69da3385c0fa2595ea8c762e7686a3d78911f41eba5eb87939545a7b7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 23:55:44.718565 containerd[1439]: time="2025-05-08T23:55:44.718477558Z" level=info msg="CreateContainer within sandbox \"8211da1b2a3256396ba185bbf68f37fba7a7cfcebd891961a6ce15fc8ea56177\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 23:55:44.733759 containerd[1439]: time="2025-05-08T23:55:44.733704718Z" level=info msg="CreateContainer within sandbox \"2a8d2eddee12d79491d79698d656f8708b616c9b2fcad5787102b52378cdd73a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0a6df2902a19c4736a8c09d85f022b2b883da37c013ec5ae7493285d4f064c5\"" May 8 23:55:44.734764 containerd[1439]: time="2025-05-08T23:55:44.734694198Z" level=info msg="StartContainer for \"f0a6df2902a19c4736a8c09d85f022b2b883da37c013ec5ae7493285d4f064c5\"" May 8 23:55:44.737861 containerd[1439]: time="2025-05-08T23:55:44.737816318Z" level=info msg="CreateContainer within sandbox \"8211da1b2a3256396ba185bbf68f37fba7a7cfcebd891961a6ce15fc8ea56177\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7701f7913fecc9712926d1d2f6bbf80476eca04827248b44f067045881713a37\"" May 8 23:55:44.738469 containerd[1439]: time="2025-05-08T23:55:44.738438998Z" level=info msg="StartContainer for \"7701f7913fecc9712926d1d2f6bbf80476eca04827248b44f067045881713a37\"" May 8 23:55:44.740734 containerd[1439]: time="2025-05-08T23:55:44.740663158Z" level=info msg="CreateContainer within sandbox \"3ef88b69da3385c0fa2595ea8c762e7686a3d78911f41eba5eb87939545a7b7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ebd34e1048d56b2bbe4e601bd69ef63727a8b1aab7f454e356f646f907e1a06\"" May 8 23:55:44.741466 containerd[1439]: time="2025-05-08T23:55:44.741059398Z" level=info msg="StartContainer for \"5ebd34e1048d56b2bbe4e601bd69ef63727a8b1aab7f454e356f646f907e1a06\"" May 8 23:55:44.764427 systemd[1]: Started cri-containerd-f0a6df2902a19c4736a8c09d85f022b2b883da37c013ec5ae7493285d4f064c5.scope - libcontainer container f0a6df2902a19c4736a8c09d85f022b2b883da37c013ec5ae7493285d4f064c5. May 8 23:55:44.767864 systemd[1]: Started cri-containerd-5ebd34e1048d56b2bbe4e601bd69ef63727a8b1aab7f454e356f646f907e1a06.scope - libcontainer container 5ebd34e1048d56b2bbe4e601bd69ef63727a8b1aab7f454e356f646f907e1a06. May 8 23:55:44.768804 systemd[1]: Started cri-containerd-7701f7913fecc9712926d1d2f6bbf80476eca04827248b44f067045881713a37.scope - libcontainer container 7701f7913fecc9712926d1d2f6bbf80476eca04827248b44f067045881713a37. 
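[annotation] The entries above walk through the same three CRI calls for each control-plane static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it. A hedged sketch of that ordering; the Runtime interface and fakeRuntime below are illustrative stand-ins, not the real k8s.io/cri-api types:

package main

import "fmt"

// Runtime models only the three calls visible in the log above.
type Runtime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return sb + "/" + name, nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
	return nil
}

func main() {
	var rt Runtime = &fakeRuntime{}
	for _, pod := range []string{
		"kube-apiserver-localhost",
		"kube-controller-manager-localhost",
		"kube-scheduler-localhost",
	} {
		sb, _ := rt.RunPodSandbox(pod)       // sandbox first
		id, _ := rt.CreateContainer(sb, pod) // then the container inside it
		_ = rt.StartContainer(id)            // then start it
	}
}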
May 8 23:55:44.799764 containerd[1439]: time="2025-05-08T23:55:44.799594998Z" level=info msg="StartContainer for \"f0a6df2902a19c4736a8c09d85f022b2b883da37c013ec5ae7493285d4f064c5\" returns successfully" May 8 23:55:44.851740 containerd[1439]: time="2025-05-08T23:55:44.847594638Z" level=info msg="StartContainer for \"7701f7913fecc9712926d1d2f6bbf80476eca04827248b44f067045881713a37\" returns successfully" May 8 23:55:44.851740 containerd[1439]: time="2025-05-08T23:55:44.847690958Z" level=info msg="StartContainer for \"5ebd34e1048d56b2bbe4e601bd69ef63727a8b1aab7f454e356f646f907e1a06\" returns successfully" May 8 23:55:44.952120 kubelet[2112]: I0508 23:55:44.948559 2112 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:44.952120 kubelet[2112]: E0508 23:55:44.948910 2112 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" May 8 23:55:45.416830 kubelet[2112]: E0508 23:55:45.416666 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:45.416830 kubelet[2112]: E0508 23:55:45.416794 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:45.419922 kubelet[2112]: E0508 23:55:45.419783 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:45.419922 kubelet[2112]: E0508 23:55:45.419872 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:45.422674 kubelet[2112]: E0508 23:55:45.422655 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:45.422881 kubelet[2112]: E0508 23:55:45.422842 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:46.422368 kubelet[2112]: E0508 23:55:46.422332 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:46.422707 kubelet[2112]: E0508 23:55:46.422453 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:46.423529 kubelet[2112]: E0508 23:55:46.423502 2112 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 23:55:46.423624 kubelet[2112]: E0508 23:55:46.423610 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:46.550659 kubelet[2112]: I0508 23:55:46.550501 2112 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:46.774537 kubelet[2112]: E0508 23:55:46.773382 2112 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" May 8 23:55:46.863521 kubelet[2112]: I0508 23:55:46.863409 2112 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 23:55:46.863521 kubelet[2112]: E0508 23:55:46.863447 2112 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 23:55:46.866963 kubelet[2112]: E0508 23:55:46.866930 2112 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:55:46.967706 kubelet[2112]: E0508 23:55:46.967659 2112 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:55:47.012768 kubelet[2112]: I0508 23:55:47.012723 2112 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 23:55:47.021687 kubelet[2112]: E0508 23:55:47.021661 2112 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 23:55:47.021942 kubelet[2112]: I0508 23:55:47.021787 2112 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 23:55:47.023513 kubelet[2112]: E0508 23:55:47.023468 2112 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 8 23:55:47.023513 kubelet[2112]: I0508 23:55:47.023496 2112 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 23:55:47.025291 kubelet[2112]: E0508 23:55:47.025199 2112 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 8 23:55:47.304701 kubelet[2112]: I0508 23:55:47.304577 2112 apiserver.go:52] "Watching apiserver" May 8 23:55:47.311157 kubelet[2112]: I0508 23:55:47.311118 2112 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:55:48.206801 kubelet[2112]: I0508 23:55:48.206755 2112 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 23:55:48.211984 kubelet[2112]: E0508 23:55:48.211954 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:48.423469 kubelet[2112]: E0508 23:55:48.423421 2112 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:48.960396 systemd[1]: Reloading requested from client PID 2389 ('systemctl') (unit session-7.scope)... May 8 23:55:48.960412 systemd[1]: Reloading... May 8 23:55:49.019266 zram_generator::config[2431]: No configuration found. May 8 23:55:49.102489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:55:49.167087 systemd[1]: Reloading finished in 206 ms. 
May 8 23:55:49.196416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:49.208291 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:55:49.208547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:49.208601 systemd[1]: kubelet.service: Consumed 1.775s CPU time, 128.2M memory peak, 0B memory swap peak. May 8 23:55:49.218664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:55:49.314533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:55:49.318065 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:55:49.356307 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:55:49.356922 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 23:55:49.356922 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:55:49.356922 kubelet[2470]: I0508 23:55:49.356611 2470 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:55:49.363574 kubelet[2470]: I0508 23:55:49.363536 2470 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 23:55:49.363574 kubelet[2470]: I0508 23:55:49.363566 2470 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:55:49.363814 kubelet[2470]: I0508 23:55:49.363791 2470 server.go:954] "Client rotation is on, will bootstrap in background" May 8 23:55:49.364996 kubelet[2470]: I0508 23:55:49.364974 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 23:55:49.367206 kubelet[2470]: I0508 23:55:49.367175 2470 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:55:49.371512 kubelet[2470]: E0508 23:55:49.371476 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 23:55:49.371512 kubelet[2470]: I0508 23:55:49.371505 2470 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 23:55:49.373746 kubelet[2470]: I0508 23:55:49.373724 2470 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:55:49.373957 kubelet[2470]: I0508 23:55:49.373913 2470 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:55:49.374096 kubelet[2470]: I0508 23:55:49.373938 2470 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 23:55:49.374174 kubelet[2470]: I0508 23:55:49.374100 2470 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:55:49.374174 kubelet[2470]: I0508 23:55:49.374109 2470 container_manager_linux.go:304] "Creating device plugin manager" May 8 23:55:49.374174 kubelet[2470]: I0508 23:55:49.374152 2470 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:49.374318 kubelet[2470]: I0508 23:55:49.374301 2470 kubelet.go:446] "Attempting to sync node with API server" May 8 23:55:49.374318 kubelet[2470]: I0508 23:55:49.374316 2470 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:55:49.374375 kubelet[2470]: I0508 23:55:49.374331 2470 kubelet.go:352] "Adding apiserver pod source" May 8 23:55:49.374375 kubelet[2470]: I0508 23:55:49.374340 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:55:49.375353 kubelet[2470]: I0508 23:55:49.375329 2470 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 23:55:49.375844 kubelet[2470]: I0508 23:55:49.375827 2470 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:55:49.376241 kubelet[2470]: I0508 23:55:49.376211 2470 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 23:55:49.376646 kubelet[2470]: I0508 23:55:49.376266 2470 server.go:1287] "Started kubelet" May 8 23:55:49.376646 kubelet[2470]: I0508 23:55:49.376321 2470 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:55:49.376646 kubelet[2470]: I0508 23:55:49.376335 2470 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:55:49.376646 kubelet[2470]: I0508 23:55:49.376594 2470 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:55:49.377164 kubelet[2470]: I0508 23:55:49.377140 2470 server.go:490] "Adding debug handlers to kubelet server" May 8 23:55:49.378825 kubelet[2470]: I0508 23:55:49.378791 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:55:49.381811 kubelet[2470]: I0508 23:55:49.378987 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 23:55:49.381811 kubelet[2470]: E0508 23:55:49.380208 2470 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:55:49.381811 kubelet[2470]: I0508 23:55:49.380252 2470 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 23:55:49.381811 kubelet[2470]: I0508 23:55:49.380414 2470 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:55:49.381811 kubelet[2470]: I0508 23:55:49.380514 2470 reconciler.go:26] "Reconciler: start to sync state" May 8 23:55:49.388525 kubelet[2470]: E0508 23:55:49.388491 2470 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:55:49.401564 kubelet[2470]: I0508 23:55:49.401536 2470 factory.go:221] Registration of the containerd container factory successfully May 8 23:55:49.401564 kubelet[2470]: I0508 23:55:49.401559 2470 factory.go:221] Registration of the systemd container factory successfully May 8 23:55:49.401698 kubelet[2470]: I0508 23:55:49.401632 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:55:49.406281 kubelet[2470]: I0508 23:55:49.406231 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:55:49.407125 kubelet[2470]: I0508 23:55:49.407101 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:55:49.407125 kubelet[2470]: I0508 23:55:49.407120 2470 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 23:55:49.407184 kubelet[2470]: I0508 23:55:49.407137 2470 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 23:55:49.407184 kubelet[2470]: I0508 23:55:49.407144 2470 kubelet.go:2388] "Starting kubelet main sync loop" May 8 23:55:49.407230 kubelet[2470]: E0508 23:55:49.407182 2470 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:55:49.441208 kubelet[2470]: I0508 23:55:49.441182 2470 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 23:55:49.441208 kubelet[2470]: I0508 23:55:49.441201 2470 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 23:55:49.441393 kubelet[2470]: I0508 23:55:49.441321 2470 state_mem.go:36] "Initialized new in-memory state store" May 8 23:55:49.441568 kubelet[2470]: I0508 23:55:49.441553 2470 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 23:55:49.441599 kubelet[2470]: I0508 23:55:49.441570 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 23:55:49.441599 kubelet[2470]: I0508 23:55:49.441588 2470 policy_none.go:49] "None policy: Start" May 8 23:55:49.441599 kubelet[2470]: I0508 23:55:49.441596 2470 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 23:55:49.441669 kubelet[2470]: I0508 23:55:49.441605 2470 state_mem.go:35] "Initializing new in-memory state store" May 8 23:55:49.441705 kubelet[2470]: I0508 23:55:49.441692 2470 state_mem.go:75] "Updated machine memory state" May 8 23:55:49.445390 kubelet[2470]: I0508 23:55:49.445355 2470 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:55:49.445986 kubelet[2470]: I0508 23:55:49.445634 2470 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 23:55:49.445986 kubelet[2470]: I0508 23:55:49.445656 2470 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:55:49.445986 kubelet[2470]: I0508 23:55:49.445914 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:55:49.446712 kubelet[2470]: E0508 23:55:49.446676 2470 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 23:55:49.508464 kubelet[2470]: I0508 23:55:49.508304 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 23:55:49.508464 kubelet[2470]: I0508 23:55:49.508304 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 23:55:49.508600 kubelet[2470]: I0508 23:55:49.508304 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.514938 kubelet[2470]: E0508 23:55:49.514901 2470 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 23:55:49.550872 kubelet[2470]: I0508 23:55:49.550830 2470 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 23:55:49.558216 kubelet[2470]: I0508 23:55:49.558177 2470 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 23:55:49.558407 kubelet[2470]: I0508 23:55:49.558287 2470 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 23:55:49.581874 kubelet[2470]: I0508 23:55:49.581815 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:49.581874 kubelet[2470]: I0508 23:55:49.581858 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:49.581874 kubelet[2470]: I0508 23:55:49.581880 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.582115 kubelet[2470]: I0508 23:55:49.581901 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.582115 kubelet[2470]: I0508 23:55:49.581931 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.582115 kubelet[2470]: I0508 23:55:49.581959 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " 
pod="kube-system/kube-scheduler-localhost" May 8 23:55:49.582115 kubelet[2470]: I0508 23:55:49.581985 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ccf06f2069db378087ce8812b69922c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ccf06f2069db378087ce8812b69922c\") " pod="kube-system/kube-apiserver-localhost" May 8 23:55:49.582115 kubelet[2470]: I0508 23:55:49.582017 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.582321 kubelet[2470]: I0508 23:55:49.582046 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:55:49.815302 kubelet[2470]: E0508 23:55:49.815107 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:49.815302 kubelet[2470]: E0508 23:55:49.815168 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:49.815302 kubelet[2470]: E0508 23:55:49.815116 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:50.003505 sudo[2508]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 23:55:50.003782 sudo[2508]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 23:55:50.374937 kubelet[2470]: I0508 23:55:50.374876 2470 apiserver.go:52] "Watching apiserver" May 8 23:55:50.381033 kubelet[2470]: I0508 23:55:50.381007 2470 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:55:50.419227 kubelet[2470]: I0508 23:55:50.418923 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 23:55:50.419227 kubelet[2470]: I0508 23:55:50.419078 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 23:55:50.421408 kubelet[2470]: E0508 23:55:50.421377 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:50.425821 kubelet[2470]: E0508 23:55:50.425746 2470 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 23:55:50.425909 kubelet[2470]: E0508 23:55:50.425895 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:50.426354 kubelet[2470]: E0508 23:55:50.426329 2470 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 23:55:50.426530 kubelet[2470]: E0508 23:55:50.426452 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:50.439625 sudo[2508]: pam_unix(sudo:session): session closed for user root May 8 23:55:50.447749 kubelet[2470]: I0508 23:55:50.447274 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.447183215 podStartE2EDuration="2.447183215s" podCreationTimestamp="2025-05-08 23:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:55:50.439389134 +0000 UTC m=+1.118325297" watchObservedRunningTime="2025-05-08 23:55:50.447183215 +0000 UTC m=+1.126119378" May 8 23:55:50.447749 kubelet[2470]: I0508 23:55:50.447416 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.447410856 podStartE2EDuration="1.447410856s" podCreationTimestamp="2025-05-08 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:55:50.447029014 +0000 UTC m=+1.125965217" watchObservedRunningTime="2025-05-08 23:55:50.447410856 +0000 UTC m=+1.126347019" May 8 23:55:50.454041 kubelet[2470]: I0508 23:55:50.453955 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.453936531 podStartE2EDuration="1.453936531s" podCreationTimestamp="2025-05-08 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:55:50.453867731 +0000 UTC m=+1.132803894" watchObservedRunningTime="2025-05-08 23:55:50.453936531 +0000 UTC m=+1.132872694" May 8 23:55:51.420276 kubelet[2470]: E0508 23:55:51.420217 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:51.421215 kubelet[2470]: E0508 23:55:51.421188 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:52.275039 sudo[1615]: pam_unix(sudo:session): session closed for user root May 8 23:55:52.276543 sshd[1612]: pam_unix(sshd:session): session closed for user core May 8 23:55:52.279763 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:52846.service: Deactivated successfully. May 8 23:55:52.281595 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:55:52.281755 systemd[1]: session-7.scope: Consumed 8.922s CPU time, 150.9M memory peak, 0B memory swap peak. May 8 23:55:52.282179 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. May 8 23:55:52.283877 systemd-logind[1422]: Removed session 7. 
May 8 23:55:52.422336 kubelet[2470]: E0508 23:55:52.422308 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:53.423757 kubelet[2470]: E0508 23:55:53.423721 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:53.752403 kubelet[2470]: E0508 23:55:53.752302 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:54.776306 kubelet[2470]: I0508 23:55:54.776274 2470 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 23:55:54.776794 kubelet[2470]: I0508 23:55:54.776740 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 23:55:54.776834 containerd[1439]: time="2025-05-08T23:55:54.776570673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 23:55:55.452263 systemd[1]: Created slice kubepods-besteffort-podab0db7c3_5af1_4f31_be93_ed9e7967bdb7.slice - libcontainer container kubepods-besteffort-podab0db7c3_5af1_4f31_be93_ed9e7967bdb7.slice. May 8 23:55:55.475861 systemd[1]: Created slice kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice - libcontainer container kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice. May 8 23:55:55.525058 kubelet[2470]: I0508 23:55:55.525009 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-lib-modules\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525058 kubelet[2470]: I0508 23:55:55.525057 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkg48\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525077 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-lib-modules\") pod \"kube-proxy-64qpr\" (UID: \"ab0db7c3-5af1-4f31-be93-ed9e7967bdb7\") " pod="kube-system/kube-proxy-64qpr" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525091 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-kube-proxy\") pod \"kube-proxy-64qpr\" (UID: \"ab0db7c3-5af1-4f31-be93-ed9e7967bdb7\") " pod="kube-system/kube-proxy-64qpr" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525106 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-bpf-maps\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525120 2470 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-run\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525135 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9m6\" (UniqueName: \"kubernetes.io/projected/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-kube-api-access-4s9m6\") pod \"kube-proxy-64qpr\" (UID: \"ab0db7c3-5af1-4f31-be93-ed9e7967bdb7\") " pod="kube-system/kube-proxy-64qpr" May 8 23:55:55.525407 kubelet[2470]: I0508 23:55:55.525150 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cni-path\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525544 kubelet[2470]: I0508 23:55:55.525167 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-config-path\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525544 kubelet[2470]: I0508 23:55:55.525183 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-kernel\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525544 kubelet[2470]: I0508 23:55:55.525198 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f110fe08-0872-400a-b06d-eb3d16cf2383-clustermesh-secrets\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525544 kubelet[2470]: I0508 23:55:55.525252 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-net\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525544 kubelet[2470]: I0508 23:55:55.525293 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-hostproc\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525654 kubelet[2470]: I0508 23:55:55.525332 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-etc-cni-netd\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525654 kubelet[2470]: I0508 23:55:55.525368 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-xtables-lock\") pod \"kube-proxy-64qpr\" (UID: \"ab0db7c3-5af1-4f31-be93-ed9e7967bdb7\") " pod="kube-system/kube-proxy-64qpr" May 8 23:55:55.525654 kubelet[2470]: I0508 23:55:55.525403 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-hubble-tls\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525654 kubelet[2470]: I0508 23:55:55.525424 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-cgroup\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.525654 kubelet[2470]: I0508 23:55:55.525441 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-xtables-lock\") pod \"cilium-jns2g\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") " pod="kube-system/cilium-jns2g" May 8 23:55:55.638541 kubelet[2470]: E0508 23:55:55.638498 2470 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 23:55:55.638541 kubelet[2470]: E0508 23:55:55.638530 2470 projected.go:194] Error preparing data for projected volume kube-api-access-4s9m6 for pod kube-system/kube-proxy-64qpr: configmap "kube-root-ca.crt" not found May 8 23:55:55.638701 kubelet[2470]: E0508 23:55:55.638589 2470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-kube-api-access-4s9m6 podName:ab0db7c3-5af1-4f31-be93-ed9e7967bdb7 nodeName:}" failed. No retries permitted until 2025-05-08 23:55:56.138568173 +0000 UTC m=+6.817504336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4s9m6" (UniqueName: "kubernetes.io/projected/ab0db7c3-5af1-4f31-be93-ed9e7967bdb7-kube-api-access-4s9m6") pod "kube-proxy-64qpr" (UID: "ab0db7c3-5af1-4f31-be93-ed9e7967bdb7") : configmap "kube-root-ca.crt" not found May 8 23:55:55.646655 kubelet[2470]: E0508 23:55:55.640136 2470 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 8 23:55:55.646655 kubelet[2470]: E0508 23:55:55.640164 2470 projected.go:194] Error preparing data for projected volume kube-api-access-hkg48 for pod kube-system/cilium-jns2g: configmap "kube-root-ca.crt" not found May 8 23:55:55.646655 kubelet[2470]: E0508 23:55:55.640204 2470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48 podName:f110fe08-0872-400a-b06d-eb3d16cf2383 nodeName:}" failed. No retries permitted until 2025-05-08 23:55:56.14019114 +0000 UTC m=+6.819127303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hkg48" (UniqueName: "kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48") pod "cilium-jns2g" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383") : configmap "kube-root-ca.crt" not found May 8 23:55:55.847102 systemd[1]: Created slice kubepods-besteffort-podcf729b46_665b_4bf4_9cfe_c1814539cf0a.slice - libcontainer container kubepods-besteffort-podcf729b46_665b_4bf4_9cfe_c1814539cf0a.slice. May 8 23:55:55.928603 kubelet[2470]: I0508 23:55:55.928552 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582b9\" (UniqueName: \"kubernetes.io/projected/cf729b46-665b-4bf4-9cfe-c1814539cf0a-kube-api-access-582b9\") pod \"cilium-operator-6c4d7847fc-xnv9m\" (UID: \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\") " pod="kube-system/cilium-operator-6c4d7847fc-xnv9m" May 8 23:55:55.928603 kubelet[2470]: I0508 23:55:55.928596 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf729b46-665b-4bf4-9cfe-c1814539cf0a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xnv9m\" (UID: \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\") " pod="kube-system/cilium-operator-6c4d7847fc-xnv9m" May 8 23:55:56.151146 kubelet[2470]: E0508 23:55:56.150651 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.151926 containerd[1439]: time="2025-05-08T23:55:56.151880595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xnv9m,Uid:cf729b46-665b-4bf4-9cfe-c1814539cf0a,Namespace:kube-system,Attempt:0,}" May 8 23:55:56.171027 containerd[1439]: time="2025-05-08T23:55:56.170903223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:56.171027 containerd[1439]: time="2025-05-08T23:55:56.170961663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:56.171027 containerd[1439]: time="2025-05-08T23:55:56.170973183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.171309 containerd[1439]: time="2025-05-08T23:55:56.171058744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.197499 systemd[1]: Started cri-containerd-ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a.scope - libcontainer container ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a. 
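[annotation] The MountVolume.SetUp failures above are another bootstrap race: the kube-api-access-* projected volumes need the kube-root-ca.crt ConfigMap, which kube-controller-manager has not yet published, so each operation is parked with durationBeforeRetry 500ms instead of being retried inline. A toy sketch of that park-and-retry shape, with the ConfigMap's appearance simulated by a flag:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Flipped once kube-controller-manager publishes kube-root-ca.crt into
// the namespace; simulated below rather than watched from the API.
var haveRootCA = false

func setUpKubeAPIAccess(pod string) error {
	if !haveRootCA {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	const retry = 500 * time.Millisecond // durationBeforeRetry from the log
	for _, pod := range []string{"kube-proxy-64qpr", "cilium-jns2g"} {
		for {
			if err := setUpKubeAPIAccess(pod); err != nil {
				fmt.Printf("MountVolume.SetUp failed for %s: %v; no retries permitted for %s\n", pod, err, retry)
				time.Sleep(retry)
				haveRootCA = true // in the real flow the ConfigMap appears asynchronously
				continue
			}
			fmt.Println("kube-api-access mounted for", pod)
			break
		}
	}
}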
May 8 23:55:56.221311 containerd[1439]: time="2025-05-08T23:55:56.221259125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xnv9m,Uid:cf729b46-665b-4bf4-9cfe-c1814539cf0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\"" May 8 23:55:56.222871 kubelet[2470]: E0508 23:55:56.222539 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.224192 containerd[1439]: time="2025-05-08T23:55:56.224161376Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 23:55:56.372770 kubelet[2470]: E0508 23:55:56.372729 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.373946 containerd[1439]: time="2025-05-08T23:55:56.373847836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64qpr,Uid:ab0db7c3-5af1-4f31-be93-ed9e7967bdb7,Namespace:kube-system,Attempt:0,}" May 8 23:55:56.378361 kubelet[2470]: E0508 23:55:56.378335 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.378923 containerd[1439]: time="2025-05-08T23:55:56.378694814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jns2g,Uid:f110fe08-0872-400a-b06d-eb3d16cf2383,Namespace:kube-system,Attempt:0,}" May 8 23:55:56.393201 containerd[1439]: time="2025-05-08T23:55:56.392950345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:56.393201 containerd[1439]: time="2025-05-08T23:55:56.393022105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:56.393201 containerd[1439]: time="2025-05-08T23:55:56.393037426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.393201 containerd[1439]: time="2025-05-08T23:55:56.393113626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.399283 containerd[1439]: time="2025-05-08T23:55:56.399045167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:56.399432 containerd[1439]: time="2025-05-08T23:55:56.399332448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:56.399735 containerd[1439]: time="2025-05-08T23:55:56.399558929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.400171 containerd[1439]: time="2025-05-08T23:55:56.399991211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:56.411378 systemd[1]: Started cri-containerd-70c0fe7e66f2dba7b760fb88f49294c3c224b0f22cd1c93d678ac97fa2d8fdce.scope - libcontainer container 70c0fe7e66f2dba7b760fb88f49294c3c224b0f22cd1c93d678ac97fa2d8fdce. May 8 23:55:56.413878 systemd[1]: Started cri-containerd-db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee.scope - libcontainer container db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee. May 8 23:55:56.434507 containerd[1439]: time="2025-05-08T23:55:56.434424895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jns2g,Uid:f110fe08-0872-400a-b06d-eb3d16cf2383,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\"" May 8 23:55:56.435534 kubelet[2470]: E0508 23:55:56.435511 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.441449 containerd[1439]: time="2025-05-08T23:55:56.441323720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64qpr,Uid:ab0db7c3-5af1-4f31-be93-ed9e7967bdb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c0fe7e66f2dba7b760fb88f49294c3c224b0f22cd1c93d678ac97fa2d8fdce\"" May 8 23:55:56.442028 kubelet[2470]: E0508 23:55:56.442010 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:56.445102 containerd[1439]: time="2025-05-08T23:55:56.445063013Z" level=info msg="CreateContainer within sandbox \"70c0fe7e66f2dba7b760fb88f49294c3c224b0f22cd1c93d678ac97fa2d8fdce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:55:56.458220 containerd[1439]: time="2025-05-08T23:55:56.458073700Z" level=info msg="CreateContainer within sandbox \"70c0fe7e66f2dba7b760fb88f49294c3c224b0f22cd1c93d678ac97fa2d8fdce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"081e9c5ce65c4971b3758d338968572d6a7c08ec495419295d59bad1e7a05819\"" May 8 23:55:56.458937 containerd[1439]: time="2025-05-08T23:55:56.458907223Z" level=info msg="StartContainer for \"081e9c5ce65c4971b3758d338968572d6a7c08ec495419295d59bad1e7a05819\"" May 8 23:55:56.486468 systemd[1]: Started cri-containerd-081e9c5ce65c4971b3758d338968572d6a7c08ec495419295d59bad1e7a05819.scope - libcontainer container 081e9c5ce65c4971b3758d338968572d6a7c08ec495419295d59bad1e7a05819. May 8 23:55:56.510265 containerd[1439]: time="2025-05-08T23:55:56.510191809Z" level=info msg="StartContainer for \"081e9c5ce65c4971b3758d338968572d6a7c08ec495419295d59bad1e7a05819\" returns successfully" May 8 23:55:57.179095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624439707.mount: Deactivated successfully. 
May 8 23:55:57.433299 kubelet[2470]: E0508 23:55:57.433181 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:59.104967 kubelet[2470]: E0508 23:55:59.104103 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:55:59.138794 kubelet[2470]: I0508 23:55:59.138732 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-64qpr" podStartSLOduration=4.138712994 podStartE2EDuration="4.138712994s" podCreationTimestamp="2025-05-08 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:55:57.44337872 +0000 UTC m=+8.122314883" watchObservedRunningTime="2025-05-08 23:55:59.138712994 +0000 UTC m=+9.817649157" May 8 23:55:59.436663 kubelet[2470]: E0508 23:55:59.436615 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:00.557267 containerd[1439]: time="2025-05-08T23:56:00.557212911Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:00.557806 containerd[1439]: time="2025-05-08T23:56:00.557777393Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 23:56:00.558480 containerd[1439]: time="2025-05-08T23:56:00.558459475Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:00.562119 containerd[1439]: time="2025-05-08T23:56:00.560569921Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.336355545s" May 8 23:56:00.562119 containerd[1439]: time="2025-05-08T23:56:00.560613761Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 23:56:00.565197 containerd[1439]: time="2025-05-08T23:56:00.565160374Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 23:56:00.567187 containerd[1439]: time="2025-05-08T23:56:00.567036339Z" level=info msg="CreateContainer within sandbox \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 23:56:00.580872 containerd[1439]: time="2025-05-08T23:56:00.580819457Z" level=info msg="CreateContainer within sandbox \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\"" May 8 23:56:00.582231 containerd[1439]: time="2025-05-08T23:56:00.581362579Z" level=info msg="StartContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\"" May 8 23:56:00.612466 systemd[1]: Started cri-containerd-dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f.scope - libcontainer container dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f. May 8 23:56:00.636407 containerd[1439]: time="2025-05-08T23:56:00.636293732Z" level=info msg="StartContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" returns successfully" May 8 23:56:01.441040 kubelet[2470]: E0508 23:56:01.440994 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:01.456348 kubelet[2470]: I0508 23:56:01.456158 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xnv9m" podStartSLOduration=2.115116222 podStartE2EDuration="6.45613942s" podCreationTimestamp="2025-05-08 23:55:55 +0000 UTC" firstStartedPulling="2025-05-08 23:55:56.223704134 +0000 UTC m=+6.902640297" lastFinishedPulling="2025-05-08 23:56:00.564727332 +0000 UTC m=+11.243663495" observedRunningTime="2025-05-08 23:56:01.45604994 +0000 UTC m=+12.134986063" watchObservedRunningTime="2025-05-08 23:56:01.45613942 +0000 UTC m=+12.135075583" May 8 23:56:02.443336 kubelet[2470]: E0508 23:56:02.443293 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:02.456405 update_engine[1426]: I20250508 23:56:02.456321 1426 update_attempter.cc:509] Updating boot flags... May 8 23:56:02.486288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2907) May 8 23:56:02.536485 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2906) May 8 23:56:02.793314 kubelet[2470]: E0508 23:56:02.793174 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:03.444655 kubelet[2470]: E0508 23:56:03.444264 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:03.759678 kubelet[2470]: E0508 23:56:03.759568 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:04.446027 kubelet[2470]: E0508 23:56:04.445996 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:05.656452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664420031.mount: Deactivated successfully. 
May 8 23:56:09.960083 containerd[1439]: time="2025-05-08T23:56:09.960018603Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:09.960487 containerd[1439]: time="2025-05-08T23:56:09.960452204Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 23:56:09.961257 containerd[1439]: time="2025-05-08T23:56:09.961213245Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:56:09.962861 containerd[1439]: time="2025-05-08T23:56:09.962825567Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.397623633s" May 8 23:56:09.962909 containerd[1439]: time="2025-05-08T23:56:09.962859327Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 23:56:09.965869 containerd[1439]: time="2025-05-08T23:56:09.965832932Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:56:10.008095 containerd[1439]: time="2025-05-08T23:56:10.008048277Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\"" May 8 23:56:10.009350 containerd[1439]: time="2025-05-08T23:56:10.008528678Z" level=info msg="StartContainer for \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\"" May 8 23:56:10.028093 systemd[1]: run-containerd-runc-k8s.io-6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7-runc.gCI3c7.mount: Deactivated successfully. May 8 23:56:10.038443 systemd[1]: Started cri-containerd-6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7.scope - libcontainer container 6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7. May 8 23:56:10.064004 containerd[1439]: time="2025-05-08T23:56:10.063957479Z" level=info msg="StartContainer for \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\" returns successfully" May 8 23:56:10.120858 systemd[1]: cri-containerd-6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7.scope: Deactivated successfully. 
May 8 23:56:10.273880 containerd[1439]: time="2025-05-08T23:56:10.268806179Z" level=info msg="shim disconnected" id=6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7 namespace=k8s.io May 8 23:56:10.273880 containerd[1439]: time="2025-05-08T23:56:10.273679426Z" level=warning msg="cleaning up after shim disconnected" id=6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7 namespace=k8s.io May 8 23:56:10.273880 containerd[1439]: time="2025-05-08T23:56:10.273692146Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:10.464398 kubelet[2470]: E0508 23:56:10.464358 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:10.466976 containerd[1439]: time="2025-05-08T23:56:10.466933429Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:56:10.486040 containerd[1439]: time="2025-05-08T23:56:10.485984737Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\"" May 8 23:56:10.488617 containerd[1439]: time="2025-05-08T23:56:10.488588940Z" level=info msg="StartContainer for \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\"" May 8 23:56:10.510401 systemd[1]: Started cri-containerd-c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835.scope - libcontainer container c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835. May 8 23:56:10.529623 containerd[1439]: time="2025-05-08T23:56:10.529473680Z" level=info msg="StartContainer for \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\" returns successfully" May 8 23:56:10.546572 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:56:10.546781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:56:10.546848 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 23:56:10.553653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:56:10.553837 systemd[1]: cri-containerd-c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835.scope: Deactivated successfully. May 8 23:56:10.570744 containerd[1439]: time="2025-05-08T23:56:10.570666101Z" level=info msg="shim disconnected" id=c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835 namespace=k8s.io May 8 23:56:10.570999 containerd[1439]: time="2025-05-08T23:56:10.570811341Z" level=warning msg="cleaning up after shim disconnected" id=c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835 namespace=k8s.io May 8 23:56:10.570999 containerd[1439]: time="2025-05-08T23:56:10.570822501Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:10.580267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:56:11.004105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7-rootfs.mount: Deactivated successfully. 
May 8 23:56:11.467670 kubelet[2470]: E0508 23:56:11.467640 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:11.471212 containerd[1439]: time="2025-05-08T23:56:11.471150015Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:56:11.488386 containerd[1439]: time="2025-05-08T23:56:11.488329279Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\"" May 8 23:56:11.489007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176745635.mount: Deactivated successfully. May 8 23:56:11.489507 containerd[1439]: time="2025-05-08T23:56:11.489043280Z" level=info msg="StartContainer for \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\"" May 8 23:56:11.527456 systemd[1]: Started cri-containerd-88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05.scope - libcontainer container 88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05. May 8 23:56:11.562102 containerd[1439]: time="2025-05-08T23:56:11.562061300Z" level=info msg="StartContainer for \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\" returns successfully" May 8 23:56:11.576780 systemd[1]: cri-containerd-88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05.scope: Deactivated successfully. May 8 23:56:11.598036 containerd[1439]: time="2025-05-08T23:56:11.597973829Z" level=info msg="shim disconnected" id=88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05 namespace=k8s.io May 8 23:56:11.598036 containerd[1439]: time="2025-05-08T23:56:11.598030469Z" level=warning msg="cleaning up after shim disconnected" id=88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05 namespace=k8s.io May 8 23:56:11.598036 containerd[1439]: time="2025-05-08T23:56:11.598039349Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:12.004587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05-rootfs.mount: Deactivated successfully. May 8 23:56:12.471517 kubelet[2470]: E0508 23:56:12.471345 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:12.473605 containerd[1439]: time="2025-05-08T23:56:12.473518710Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:56:12.562865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023676033.mount: Deactivated successfully. 
May 8 23:56:12.564159 containerd[1439]: time="2025-05-08T23:56:12.564040866Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\"" May 8 23:56:12.564705 containerd[1439]: time="2025-05-08T23:56:12.564680827Z" level=info msg="StartContainer for \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\"" May 8 23:56:12.594444 systemd[1]: Started cri-containerd-c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f.scope - libcontainer container c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f. May 8 23:56:12.612469 systemd[1]: cri-containerd-c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f.scope: Deactivated successfully. May 8 23:56:12.615310 containerd[1439]: time="2025-05-08T23:56:12.615266892Z" level=info msg="StartContainer for \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\" returns successfully" May 8 23:56:12.616699 containerd[1439]: time="2025-05-08T23:56:12.616603894Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice/cri-containerd-c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f.scope/memory.events\": no such file or directory" May 8 23:56:12.635423 containerd[1439]: time="2025-05-08T23:56:12.635156278Z" level=info msg="shim disconnected" id=c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f namespace=k8s.io May 8 23:56:12.635423 containerd[1439]: time="2025-05-08T23:56:12.635333238Z" level=warning msg="cleaning up after shim disconnected" id=c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f namespace=k8s.io May 8 23:56:12.635423 containerd[1439]: time="2025-05-08T23:56:12.635343758Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:56:13.004697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f-rootfs.mount: Deactivated successfully. May 8 23:56:13.476166 kubelet[2470]: E0508 23:56:13.474716 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:13.479209 containerd[1439]: time="2025-05-08T23:56:13.478957164Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:56:13.494782 containerd[1439]: time="2025-05-08T23:56:13.494729983Z" level=info msg="CreateContainer within sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\"" May 8 23:56:13.495183 containerd[1439]: time="2025-05-08T23:56:13.495157504Z" level=info msg="StartContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\"" May 8 23:56:13.525441 systemd[1]: Started cri-containerd-596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d.scope - libcontainer container 596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d. 
May 8 23:56:13.554543 containerd[1439]: time="2025-05-08T23:56:13.554496855Z" level=info msg="StartContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" returns successfully" May 8 23:56:13.714842 kubelet[2470]: I0508 23:56:13.713015 2470 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 23:56:13.776676 systemd[1]: Created slice kubepods-burstable-pod714caff0_b326_4342_8533_8e82a6e4ab1e.slice - libcontainer container kubepods-burstable-pod714caff0_b326_4342_8533_8e82a6e4ab1e.slice. May 8 23:56:13.783005 systemd[1]: Created slice kubepods-burstable-podf619d500_a333_4e4c_8cf9_68e2d1710486.slice - libcontainer container kubepods-burstable-podf619d500_a333_4e4c_8cf9_68e2d1710486.slice. May 8 23:56:13.853897 kubelet[2470]: I0508 23:56:13.853846 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/714caff0-b326-4342-8533-8e82a6e4ab1e-config-volume\") pod \"coredns-668d6bf9bc-zshbx\" (UID: \"714caff0-b326-4342-8533-8e82a6e4ab1e\") " pod="kube-system/coredns-668d6bf9bc-zshbx" May 8 23:56:13.854253 kubelet[2470]: I0508 23:56:13.854134 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnqc\" (UniqueName: \"kubernetes.io/projected/f619d500-a333-4e4c-8cf9-68e2d1710486-kube-api-access-2bnqc\") pod \"coredns-668d6bf9bc-jfq7r\" (UID: \"f619d500-a333-4e4c-8cf9-68e2d1710486\") " pod="kube-system/coredns-668d6bf9bc-jfq7r" May 8 23:56:13.854253 kubelet[2470]: I0508 23:56:13.854163 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f619d500-a333-4e4c-8cf9-68e2d1710486-config-volume\") pod \"coredns-668d6bf9bc-jfq7r\" (UID: \"f619d500-a333-4e4c-8cf9-68e2d1710486\") " pod="kube-system/coredns-668d6bf9bc-jfq7r" May 8 23:56:13.854253 kubelet[2470]: I0508 23:56:13.854188 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxvv\" (UniqueName: \"kubernetes.io/projected/714caff0-b326-4342-8533-8e82a6e4ab1e-kube-api-access-fmxvv\") pod \"coredns-668d6bf9bc-zshbx\" (UID: \"714caff0-b326-4342-8533-8e82a6e4ab1e\") " pod="kube-system/coredns-668d6bf9bc-zshbx" May 8 23:56:14.080859 kubelet[2470]: E0508 23:56:14.080742 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:14.083216 containerd[1439]: time="2025-05-08T23:56:14.083172967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zshbx,Uid:714caff0-b326-4342-8533-8e82a6e4ab1e,Namespace:kube-system,Attempt:0,}" May 8 23:56:14.085913 kubelet[2470]: E0508 23:56:14.085847 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:14.086858 containerd[1439]: time="2025-05-08T23:56:14.086723571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfq7r,Uid:f619d500-a333-4e4c-8cf9-68e2d1710486,Namespace:kube-system,Attempt:0,}" May 8 23:56:14.478738 kubelet[2470]: E0508 23:56:14.478705 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 8 23:56:15.481012 kubelet[2470]: E0508 23:56:15.480670 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:15.730435 systemd-networkd[1374]: cilium_host: Link UP May 8 23:56:15.730555 systemd-networkd[1374]: cilium_net: Link UP May 8 23:56:15.730557 systemd-networkd[1374]: cilium_net: Gained carrier May 8 23:56:15.730690 systemd-networkd[1374]: cilium_host: Gained carrier May 8 23:56:15.730829 systemd-networkd[1374]: cilium_net: Gained IPv6LL May 8 23:56:15.733070 systemd-networkd[1374]: cilium_host: Gained IPv6LL May 8 23:56:15.819756 systemd-networkd[1374]: cilium_vxlan: Link UP May 8 23:56:15.819762 systemd-networkd[1374]: cilium_vxlan: Gained carrier May 8 23:56:16.185317 kernel: NET: Registered PF_ALG protocol family May 8 23:56:16.482580 kubelet[2470]: E0508 23:56:16.482475 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:16.826251 systemd-networkd[1374]: lxc_health: Link UP May 8 23:56:16.835615 systemd-networkd[1374]: lxc_health: Gained carrier May 8 23:56:17.183018 systemd-networkd[1374]: lxc8f3aaf2114e0: Link UP May 8 23:56:17.193309 kernel: eth0: renamed from tmp991d5 May 8 23:56:17.202474 systemd-networkd[1374]: lxc8f3aaf2114e0: Gained carrier May 8 23:56:17.215359 systemd-networkd[1374]: lxc743b333d4c28: Link UP May 8 23:56:17.225290 kernel: eth0: renamed from tmp6e61f May 8 23:56:17.248590 systemd-networkd[1374]: lxc743b333d4c28: Gained carrier May 8 23:56:17.529797 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL May 8 23:56:17.797084 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:42830.service - OpenSSH per-connection server daemon (10.0.0.1:42830). May 8 23:56:17.837162 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 42830 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:17.839048 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:17.846525 systemd-logind[1422]: New session 8 of user core. May 8 23:56:17.856433 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 23:56:17.984301 sshd[3705]: pam_unix(sshd:session): session closed for user core May 8 23:56:17.987742 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:42830.service: Deactivated successfully. May 8 23:56:17.990962 systemd[1]: session-8.scope: Deactivated successfully. May 8 23:56:17.991786 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. May 8 23:56:17.992991 systemd-logind[1422]: Removed session 8. 
May 8 23:56:18.391679 kubelet[2470]: E0508 23:56:18.391628 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:18.409822 kubelet[2470]: I0508 23:56:18.407910 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jns2g" podStartSLOduration=9.880220758 podStartE2EDuration="23.407897584s" podCreationTimestamp="2025-05-08 23:55:55 +0000 UTC" firstStartedPulling="2025-05-08 23:55:56.436806744 +0000 UTC m=+7.115742867" lastFinishedPulling="2025-05-08 23:56:09.96448353 +0000 UTC m=+20.643419693" observedRunningTime="2025-05-08 23:56:14.493633551 +0000 UTC m=+25.172569714" watchObservedRunningTime="2025-05-08 23:56:18.407897584 +0000 UTC m=+29.086833747" May 8 23:56:18.487681 systemd-networkd[1374]: lxc8f3aaf2114e0: Gained IPv6LL May 8 23:56:18.550748 systemd-networkd[1374]: lxc743b333d4c28: Gained IPv6LL May 8 23:56:18.678694 systemd-networkd[1374]: lxc_health: Gained IPv6LL May 8 23:56:20.787053 containerd[1439]: time="2025-05-08T23:56:20.786044723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:20.787053 containerd[1439]: time="2025-05-08T23:56:20.786113963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:20.787053 containerd[1439]: time="2025-05-08T23:56:20.786125203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:20.787053 containerd[1439]: time="2025-05-08T23:56:20.786212723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:20.811469 systemd[1]: Started cri-containerd-6e61f2ce69bcf73fed2654da0a58c73267010dc3d864ff0d781aebe4e650b19a.scope - libcontainer container 6e61f2ce69bcf73fed2654da0a58c73267010dc3d864ff0d781aebe4e650b19a. May 8 23:56:20.813471 containerd[1439]: time="2025-05-08T23:56:20.812805023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:56:20.813471 containerd[1439]: time="2025-05-08T23:56:20.813228784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:56:20.813471 containerd[1439]: time="2025-05-08T23:56:20.813280464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:20.813471 containerd[1439]: time="2025-05-08T23:56:20.813409784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:56:20.826621 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:20.840482 systemd[1]: Started cri-containerd-991d555e4742565402b47a1d7d83a1450c27e50bb2619bd0f5c474c8a926c26b.scope - libcontainer container 991d555e4742565402b47a1d7d83a1450c27e50bb2619bd0f5c474c8a926c26b. 
May 8 23:56:20.854164 containerd[1439]: time="2025-05-08T23:56:20.854128455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zshbx,Uid:714caff0-b326-4342-8533-8e82a6e4ab1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e61f2ce69bcf73fed2654da0a58c73267010dc3d864ff0d781aebe4e650b19a\"" May 8 23:56:20.854169 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:56:20.855101 kubelet[2470]: E0508 23:56:20.855075 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:20.858569 containerd[1439]: time="2025-05-08T23:56:20.858526979Z" level=info msg="CreateContainer within sandbox \"6e61f2ce69bcf73fed2654da0a58c73267010dc3d864ff0d781aebe4e650b19a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:56:20.873248 containerd[1439]: time="2025-05-08T23:56:20.873181350Z" level=info msg="CreateContainer within sandbox \"6e61f2ce69bcf73fed2654da0a58c73267010dc3d864ff0d781aebe4e650b19a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02a4daf1b94ceb1b9106603d76a8050d2e9e2454860b90f19aa395e74c708177\"" May 8 23:56:20.875147 containerd[1439]: time="2025-05-08T23:56:20.873731430Z" level=info msg="StartContainer for \"02a4daf1b94ceb1b9106603d76a8050d2e9e2454860b90f19aa395e74c708177\"" May 8 23:56:20.887571 containerd[1439]: time="2025-05-08T23:56:20.887532281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfq7r,Uid:f619d500-a333-4e4c-8cf9-68e2d1710486,Namespace:kube-system,Attempt:0,} returns sandbox id \"991d555e4742565402b47a1d7d83a1450c27e50bb2619bd0f5c474c8a926c26b\"" May 8 23:56:20.891812 kubelet[2470]: E0508 23:56:20.890837 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:20.894222 containerd[1439]: time="2025-05-08T23:56:20.894105246Z" level=info msg="CreateContainer within sandbox \"991d555e4742565402b47a1d7d83a1450c27e50bb2619bd0f5c474c8a926c26b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:56:20.904286 containerd[1439]: time="2025-05-08T23:56:20.904165574Z" level=info msg="CreateContainer within sandbox \"991d555e4742565402b47a1d7d83a1450c27e50bb2619bd0f5c474c8a926c26b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aca6e87a884896ddb4d8b9c23fe607daf7a39d5f555c214e21a3ae52e995b724\"" May 8 23:56:20.905580 containerd[1439]: time="2025-05-08T23:56:20.904601854Z" level=info msg="StartContainer for \"aca6e87a884896ddb4d8b9c23fe607daf7a39d5f555c214e21a3ae52e995b724\"" May 8 23:56:20.907530 systemd[1]: Started cri-containerd-02a4daf1b94ceb1b9106603d76a8050d2e9e2454860b90f19aa395e74c708177.scope - libcontainer container 02a4daf1b94ceb1b9106603d76a8050d2e9e2454860b90f19aa395e74c708177. May 8 23:56:20.934667 systemd[1]: Started cri-containerd-aca6e87a884896ddb4d8b9c23fe607daf7a39d5f555c214e21a3ae52e995b724.scope - libcontainer container aca6e87a884896ddb4d8b9c23fe607daf7a39d5f555c214e21a3ae52e995b724. 
May 8 23:56:20.943833 containerd[1439]: time="2025-05-08T23:56:20.943792004Z" level=info msg="StartContainer for \"02a4daf1b94ceb1b9106603d76a8050d2e9e2454860b90f19aa395e74c708177\" returns successfully" May 8 23:56:20.961829 containerd[1439]: time="2025-05-08T23:56:20.961782618Z" level=info msg="StartContainer for \"aca6e87a884896ddb4d8b9c23fe607daf7a39d5f555c214e21a3ae52e995b724\" returns successfully" May 8 23:56:21.493023 kubelet[2470]: E0508 23:56:21.492963 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:21.495667 kubelet[2470]: E0508 23:56:21.495583 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:21.516379 kubelet[2470]: I0508 23:56:21.516289 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jfq7r" podStartSLOduration=26.516219859 podStartE2EDuration="26.516219859s" podCreationTimestamp="2025-05-08 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:56:21.505982451 +0000 UTC m=+32.184918694" watchObservedRunningTime="2025-05-08 23:56:21.516219859 +0000 UTC m=+32.195155982" May 8 23:56:21.518021 kubelet[2470]: I0508 23:56:21.517821 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zshbx" podStartSLOduration=26.51781074 podStartE2EDuration="26.51781074s" podCreationTimestamp="2025-05-08 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:56:21.517295419 +0000 UTC m=+32.196231582" watchObservedRunningTime="2025-05-08 23:56:21.51781074 +0000 UTC m=+32.196746903" May 8 23:56:22.497749 kubelet[2470]: E0508 23:56:22.497659 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:22.497749 kubelet[2470]: E0508 23:56:22.497713 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:22.822094 kubelet[2470]: I0508 23:56:22.821881 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 23:56:22.822422 kubelet[2470]: E0508 23:56:22.822355 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:23.003413 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:59020.service - OpenSSH per-connection server daemon (10.0.0.1:59020). May 8 23:56:23.052933 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 59020 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:23.054818 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:23.059898 systemd-logind[1422]: New session 9 of user core. May 8 23:56:23.076449 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 8 23:56:23.203905 sshd[3899]: pam_unix(sshd:session): session closed for user core May 8 23:56:23.207311 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:59020.service: Deactivated successfully. May 8 23:56:23.209500 systemd[1]: session-9.scope: Deactivated successfully. May 8 23:56:23.211519 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. May 8 23:56:23.212710 systemd-logind[1422]: Removed session 9. May 8 23:56:23.498952 kubelet[2470]: E0508 23:56:23.498828 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:23.498952 kubelet[2470]: E0508 23:56:23.498885 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:23.499415 kubelet[2470]: E0508 23:56:23.499053 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:56:28.218227 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:59026.service - OpenSSH per-connection server daemon (10.0.0.1:59026). May 8 23:56:28.264333 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 59026 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:28.265790 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:28.269685 systemd-logind[1422]: New session 10 of user core. May 8 23:56:28.285447 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 23:56:28.398221 sshd[3917]: pam_unix(sshd:session): session closed for user core May 8 23:56:28.401801 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:59026.service: Deactivated successfully. May 8 23:56:28.403715 systemd[1]: session-10.scope: Deactivated successfully. May 8 23:56:28.404630 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. May 8 23:56:28.405633 systemd-logind[1422]: Removed session 10. May 8 23:56:33.414563 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:57698.service - OpenSSH per-connection server daemon (10.0.0.1:57698). May 8 23:56:33.454133 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 57698 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:33.455584 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:33.459385 systemd-logind[1422]: New session 11 of user core. May 8 23:56:33.469406 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 23:56:33.589807 sshd[3933]: pam_unix(sshd:session): session closed for user core May 8 23:56:33.600935 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:57698.service: Deactivated successfully. May 8 23:56:33.602832 systemd[1]: session-11.scope: Deactivated successfully. May 8 23:56:33.604142 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. May 8 23:56:33.605668 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:57708.service - OpenSSH per-connection server daemon (10.0.0.1:57708). May 8 23:56:33.607104 systemd-logind[1422]: Removed session 11. 
May 8 23:56:33.649685 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 57708 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:33.651456 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:33.655932 systemd-logind[1422]: New session 12 of user core. May 8 23:56:33.665411 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 23:56:33.839887 sshd[3948]: pam_unix(sshd:session): session closed for user core May 8 23:56:33.852608 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:57708.service: Deactivated successfully. May 8 23:56:33.856580 systemd[1]: session-12.scope: Deactivated successfully. May 8 23:56:33.858522 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. May 8 23:56:33.868621 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:57710.service - OpenSSH per-connection server daemon (10.0.0.1:57710). May 8 23:56:33.869693 systemd-logind[1422]: Removed session 12. May 8 23:56:33.905724 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 57710 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:33.907098 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:33.911706 systemd-logind[1422]: New session 13 of user core. May 8 23:56:33.920427 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 23:56:34.035844 sshd[3960]: pam_unix(sshd:session): session closed for user core May 8 23:56:34.039098 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:57710.service: Deactivated successfully. May 8 23:56:34.042054 systemd[1]: session-13.scope: Deactivated successfully. May 8 23:56:34.042819 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. May 8 23:56:34.043655 systemd-logind[1422]: Removed session 13. May 8 23:56:39.047466 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:57726.service - OpenSSH per-connection server daemon (10.0.0.1:57726). May 8 23:56:39.090157 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 57726 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:39.091661 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:39.096772 systemd-logind[1422]: New session 14 of user core. May 8 23:56:39.106467 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 23:56:39.236674 sshd[3974]: pam_unix(sshd:session): session closed for user core May 8 23:56:39.239990 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:57726.service: Deactivated successfully. May 8 23:56:39.241649 systemd[1]: session-14.scope: Deactivated successfully. May 8 23:56:39.242405 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. May 8 23:56:39.243391 systemd-logind[1422]: Removed session 14. May 8 23:56:44.260539 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:56070.service - OpenSSH per-connection server daemon (10.0.0.1:56070). May 8 23:56:44.293921 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 56070 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:44.295347 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:44.298950 systemd-logind[1422]: New session 15 of user core. May 8 23:56:44.308422 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 8 23:56:44.426518 sshd[3988]: pam_unix(sshd:session): session closed for user core May 8 23:56:44.435833 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:56070.service: Deactivated successfully. May 8 23:56:44.438815 systemd[1]: session-15.scope: Deactivated successfully. May 8 23:56:44.440765 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. May 8 23:56:44.447527 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:56080.service - OpenSSH per-connection server daemon (10.0.0.1:56080). May 8 23:56:44.448460 systemd-logind[1422]: Removed session 15. May 8 23:56:44.482180 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 56080 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:44.483571 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:44.487799 systemd-logind[1422]: New session 16 of user core. May 8 23:56:44.497451 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 23:56:44.696112 sshd[4002]: pam_unix(sshd:session): session closed for user core May 8 23:56:44.706813 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:56080.service: Deactivated successfully. May 8 23:56:44.709515 systemd[1]: session-16.scope: Deactivated successfully. May 8 23:56:44.710762 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. May 8 23:56:44.723626 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090). May 8 23:56:44.724746 systemd-logind[1422]: Removed session 16. May 8 23:56:44.762328 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:44.763744 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:44.769971 systemd-logind[1422]: New session 17 of user core. May 8 23:56:44.779451 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 23:56:45.503691 sshd[4015]: pam_unix(sshd:session): session closed for user core May 8 23:56:45.510160 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:56090.service: Deactivated successfully. May 8 23:56:45.510691 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. May 8 23:56:45.512388 systemd[1]: session-17.scope: Deactivated successfully. May 8 23:56:45.527671 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:56094.service - OpenSSH per-connection server daemon (10.0.0.1:56094). May 8 23:56:45.529745 systemd-logind[1422]: Removed session 17. May 8 23:56:45.565734 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 56094 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:45.567228 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:45.571200 systemd-logind[1422]: New session 18 of user core. May 8 23:56:45.578441 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 23:56:45.808425 sshd[4036]: pam_unix(sshd:session): session closed for user core May 8 23:56:45.821109 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:56094.service: Deactivated successfully. May 8 23:56:45.823425 systemd[1]: session-18.scope: Deactivated successfully. May 8 23:56:45.824586 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. May 8 23:56:45.832622 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:56102.service - OpenSSH per-connection server daemon (10.0.0.1:56102). May 8 23:56:45.833527 systemd-logind[1422]: Removed session 18. 
May 8 23:56:45.865346 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 56102 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:45.866705 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:45.870597 systemd-logind[1422]: New session 19 of user core. May 8 23:56:45.884519 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 23:56:45.991404 sshd[4049]: pam_unix(sshd:session): session closed for user core May 8 23:56:45.995632 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:56102.service: Deactivated successfully. May 8 23:56:45.997925 systemd[1]: session-19.scope: Deactivated successfully. May 8 23:56:45.998699 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. May 8 23:56:45.999615 systemd-logind[1422]: Removed session 19. May 8 23:56:51.004356 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). May 8 23:56:51.041699 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:51.042966 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:51.046652 systemd-logind[1422]: New session 20 of user core. May 8 23:56:51.056426 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 23:56:51.164390 sshd[4068]: pam_unix(sshd:session): session closed for user core May 8 23:56:51.167721 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:56104.service: Deactivated successfully. May 8 23:56:51.169545 systemd[1]: session-20.scope: Deactivated successfully. May 8 23:56:51.170164 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. May 8 23:56:51.170963 systemd-logind[1422]: Removed session 20. May 8 23:56:56.177075 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:50266.service - OpenSSH per-connection server daemon (10.0.0.1:50266). May 8 23:56:56.213974 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 50266 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:56:56.215297 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:56:56.218923 systemd-logind[1422]: New session 21 of user core. May 8 23:56:56.230485 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 23:56:56.340483 sshd[4083]: pam_unix(sshd:session): session closed for user core May 8 23:56:56.344394 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:50266.service: Deactivated successfully. May 8 23:56:56.346446 systemd[1]: session-21.scope: Deactivated successfully. May 8 23:56:56.347140 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. May 8 23:56:56.347959 systemd-logind[1422]: Removed session 21. May 8 23:57:01.352198 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:50276.service - OpenSSH per-connection server daemon (10.0.0.1:50276). May 8 23:57:01.388462 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 50276 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:57:01.389737 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:57:01.395330 systemd-logind[1422]: New session 22 of user core. May 8 23:57:01.402862 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 8 23:57:01.513882 sshd[4099]: pam_unix(sshd:session): session closed for user core May 8 23:57:01.532840 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:50276.service: Deactivated successfully. May 8 23:57:01.534536 systemd[1]: session-22.scope: Deactivated successfully. May 8 23:57:01.537617 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. May 8 23:57:01.538937 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:50282.service - OpenSSH per-connection server daemon (10.0.0.1:50282). May 8 23:57:01.539729 systemd-logind[1422]: Removed session 22. May 8 23:57:01.594102 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 50282 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:57:01.595442 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:57:01.599312 systemd-logind[1422]: New session 23 of user core. May 8 23:57:01.609385 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 23:57:04.113287 containerd[1439]: time="2025-05-08T23:57:04.112661048Z" level=info msg="StopContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" with timeout 30 (s)" May 8 23:57:04.114350 containerd[1439]: time="2025-05-08T23:57:04.113977737Z" level=info msg="Stop container \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" with signal terminated" May 8 23:57:04.126480 systemd[1]: cri-containerd-dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f.scope: Deactivated successfully. May 8 23:57:04.148382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f-rootfs.mount: Deactivated successfully. May 8 23:57:04.152089 containerd[1439]: time="2025-05-08T23:57:04.151996161Z" level=info msg="StopContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" with timeout 2 (s)" May 8 23:57:04.152508 containerd[1439]: time="2025-05-08T23:57:04.152482364Z" level=info msg="Stop container \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" with signal terminated" May 8 23:57:04.152837 containerd[1439]: time="2025-05-08T23:57:04.152762566Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:57:04.155678 containerd[1439]: time="2025-05-08T23:57:04.155517025Z" level=info msg="shim disconnected" id=dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f namespace=k8s.io May 8 23:57:04.155678 containerd[1439]: time="2025-05-08T23:57:04.155555986Z" level=warning msg="cleaning up after shim disconnected" id=dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f namespace=k8s.io May 8 23:57:04.155678 containerd[1439]: time="2025-05-08T23:57:04.155564226Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:57:04.159388 systemd-networkd[1374]: lxc_health: Link DOWN May 8 23:57:04.159692 systemd-networkd[1374]: lxc_health: Lost carrier May 8 23:57:04.184925 systemd[1]: cri-containerd-596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d.scope: Deactivated successfully. May 8 23:57:04.185180 systemd[1]: cri-containerd-596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d.scope: Consumed 6.665s CPU time. 
May 8 23:57:04.203148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d-rootfs.mount: Deactivated successfully.
May 8 23:57:04.204073 containerd[1439]: time="2025-05-08T23:57:04.204034642Z" level=info msg="StopContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" returns successfully"
May 8 23:57:04.204883 containerd[1439]: time="2025-05-08T23:57:04.204850127Z" level=info msg="StopPodSandbox for \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\""
May 8 23:57:04.205010 containerd[1439]: time="2025-05-08T23:57:04.204930648Z" level=info msg="Container to stop \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.206965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a-shm.mount: Deactivated successfully.
May 8 23:57:04.211767 containerd[1439]: time="2025-05-08T23:57:04.211429893Z" level=info msg="shim disconnected" id=596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d namespace=k8s.io
May 8 23:57:04.211767 containerd[1439]: time="2025-05-08T23:57:04.211493774Z" level=warning msg="cleaning up after shim disconnected" id=596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d namespace=k8s.io
May 8 23:57:04.211767 containerd[1439]: time="2025-05-08T23:57:04.211502614Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:57:04.213704 systemd[1]: cri-containerd-ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a.scope: Deactivated successfully.
May 8 23:57:04.233221 containerd[1439]: time="2025-05-08T23:57:04.233170844Z" level=info msg="StopContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" returns successfully"
May 8 23:57:04.233947 containerd[1439]: time="2025-05-08T23:57:04.233920649Z" level=info msg="StopPodSandbox for \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\""
May 8 23:57:04.235218 containerd[1439]: time="2025-05-08T23:57:04.233960689Z" level=info msg="Container to stop \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.235218 containerd[1439]: time="2025-05-08T23:57:04.233975289Z" level=info msg="Container to stop \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.235218 containerd[1439]: time="2025-05-08T23:57:04.233985650Z" level=info msg="Container to stop \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.235218 containerd[1439]: time="2025-05-08T23:57:04.233995810Z" level=info msg="Container to stop \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.235218 containerd[1439]: time="2025-05-08T23:57:04.234005770Z" level=info msg="Container to stop \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:57:04.237029 containerd[1439]: time="2025-05-08T23:57:04.236977310Z" level=info msg="shim disconnected" id=ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a namespace=k8s.io
May 8 23:57:04.237584 containerd[1439]: time="2025-05-08T23:57:04.237440633Z" level=warning msg="cleaning up after shim disconnected" id=ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a namespace=k8s.io
May 8 23:57:04.237584 containerd[1439]: time="2025-05-08T23:57:04.237463554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:57:04.240108 systemd[1]: cri-containerd-db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee.scope: Deactivated successfully.
May 8 23:57:04.252219 containerd[1439]: time="2025-05-08T23:57:04.252160696Z" level=info msg="TearDown network for sandbox \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\" successfully"
May 8 23:57:04.252219 containerd[1439]: time="2025-05-08T23:57:04.252195136Z" level=info msg="StopPodSandbox for \"ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a\" returns successfully"
May 8 23:57:04.269669 containerd[1439]: time="2025-05-08T23:57:04.269613977Z" level=info msg="shim disconnected" id=db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee namespace=k8s.io
May 8 23:57:04.270008 containerd[1439]: time="2025-05-08T23:57:04.269853778Z" level=warning msg="cleaning up after shim disconnected" id=db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee namespace=k8s.io
May 8 23:57:04.270008 containerd[1439]: time="2025-05-08T23:57:04.269869578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:57:04.279795 containerd[1439]: time="2025-05-08T23:57:04.279734247Z" level=info msg="TearDown network for sandbox \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" successfully"
May 8 23:57:04.279795 containerd[1439]: time="2025-05-08T23:57:04.279773007Z" level=info msg="StopPodSandbox for \"db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee\" returns successfully"
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358570 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-hostproc\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358617 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf729b46-665b-4bf4-9cfe-c1814539cf0a-cilium-config-path\") pod \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\" (UID: \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\") "
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358635 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-run\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358649 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-bpf-maps\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358667 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-config-path\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.358796 kubelet[2470]: I0508 23:57:04.358682 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cni-path\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358701 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-hubble-tls\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358722 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-xtables-lock\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358738 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-lib-modules\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358753 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkg48\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358767 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-net\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359325 kubelet[2470]: I0508 23:57:04.358784 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-582b9\" (UniqueName: \"kubernetes.io/projected/cf729b46-665b-4bf4-9cfe-c1814539cf0a-kube-api-access-582b9\") pod \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\" (UID: \"cf729b46-665b-4bf4-9cfe-c1814539cf0a\") "
May 8 23:57:04.359455 kubelet[2470]: I0508 23:57:04.358831 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f110fe08-0872-400a-b06d-eb3d16cf2383-clustermesh-secrets\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359455 kubelet[2470]: I0508 23:57:04.358846 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-kernel\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359455 kubelet[2470]: I0508 23:57:04.358861 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-etc-cni-netd\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.359455 kubelet[2470]: I0508 23:57:04.358876 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-cgroup\") pod \"f110fe08-0872-400a-b06d-eb3d16cf2383\" (UID: \"f110fe08-0872-400a-b06d-eb3d16cf2383\") "
May 8 23:57:04.360740 kubelet[2470]: I0508 23:57:04.360142 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.360740 kubelet[2470]: I0508 23:57:04.360192 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cni-path" (OuterVolumeSpecName: "cni-path") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.360740 kubelet[2470]: I0508 23:57:04.360707 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-hostproc" (OuterVolumeSpecName: "hostproc") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.360817 kubelet[2470]: I0508 23:57:04.360743 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364311 kubelet[2470]: I0508 23:57:04.361484 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364311 kubelet[2470]: I0508 23:57:04.361532 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364311 kubelet[2470]: I0508 23:57:04.361551 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364311 kubelet[2470]: I0508 23:57:04.361571 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364311 kubelet[2470]: I0508 23:57:04.362715 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf729b46-665b-4bf4-9cfe-c1814539cf0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf729b46-665b-4bf4-9cfe-c1814539cf0a" (UID: "cf729b46-665b-4bf4-9cfe-c1814539cf0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 23:57:04.364505 kubelet[2470]: I0508 23:57:04.363959 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 23:57:04.364505 kubelet[2470]: I0508 23:57:04.364017 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.364505 kubelet[2470]: I0508 23:57:04.364040 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:57:04.366641 kubelet[2470]: I0508 23:57:04.366615 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf729b46-665b-4bf4-9cfe-c1814539cf0a-kube-api-access-582b9" (OuterVolumeSpecName: "kube-api-access-582b9") pod "cf729b46-665b-4bf4-9cfe-c1814539cf0a" (UID: "cf729b46-665b-4bf4-9cfe-c1814539cf0a"). InnerVolumeSpecName "kube-api-access-582b9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:57:04.367618 kubelet[2470]: I0508 23:57:04.367589 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f110fe08-0872-400a-b06d-eb3d16cf2383-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 23:57:04.367770 kubelet[2470]: I0508 23:57:04.367722 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48" (OuterVolumeSpecName: "kube-api-access-hkg48") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "kube-api-access-hkg48". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:57:04.367815 kubelet[2470]: I0508 23:57:04.367725 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f110fe08-0872-400a-b06d-eb3d16cf2383" (UID: "f110fe08-0872-400a-b06d-eb3d16cf2383"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459330 2470 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459362 2470 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459371 2470 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkg48\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-kube-api-access-hkg48\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459382 2470 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459392 2470 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-582b9\" (UniqueName: \"kubernetes.io/projected/cf729b46-665b-4bf4-9cfe-c1814539cf0a-kube-api-access-582b9\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459401 2470 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f110fe08-0872-400a-b06d-eb3d16cf2383-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459410 2470 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459499 kubelet[2470]: I0508 23:57:04.459418 2470 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459426 2470 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459435 2470 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-hostproc\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459443 2470 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-run\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459451 2470 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459459 2470 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f110fe08-0872-400a-b06d-eb3d16cf2383-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459466 2470 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf729b46-665b-4bf4-9cfe-c1814539cf0a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459474 2470 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f110fe08-0872-400a-b06d-eb3d16cf2383-cni-path\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.459781 kubelet[2470]: I0508 23:57:04.459481 2470 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f110fe08-0872-400a-b06d-eb3d16cf2383-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 23:57:04.468415 kubelet[2470]: E0508 23:57:04.468372 2470 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 23:57:04.579952 kubelet[2470]: I0508 23:57:04.579830 2470 scope.go:117] "RemoveContainer" containerID="dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f"
May 8 23:57:04.581627 containerd[1439]: time="2025-05-08T23:57:04.581468699Z" level=info msg="RemoveContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\""
May 8 23:57:04.586163 systemd[1]: Removed slice kubepods-besteffort-podcf729b46_665b_4bf4_9cfe_c1814539cf0a.slice - libcontainer container kubepods-besteffort-podcf729b46_665b_4bf4_9cfe_c1814539cf0a.slice.
May 8 23:57:04.588067 containerd[1439]: time="2025-05-08T23:57:04.588033064Z" level=info msg="RemoveContainer for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" returns successfully"
May 8 23:57:04.588450 kubelet[2470]: I0508 23:57:04.588221 2470 scope.go:117] "RemoveContainer" containerID="dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f"
May 8 23:57:04.590480 containerd[1439]: time="2025-05-08T23:57:04.588626628Z" level=error msg="ContainerStatus for \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\": not found"
May 8 23:57:04.593360 systemd[1]: Removed slice kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice - libcontainer container kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice.
May 8 23:57:04.593618 systemd[1]: kubepods-burstable-podf110fe08_0872_400a_b06d_eb3d16cf2383.slice: Consumed 6.808s CPU time.
May 8 23:57:04.597630 kubelet[2470]: E0508 23:57:04.597495 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\": not found" containerID="dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f"
May 8 23:57:04.597630 kubelet[2470]: I0508 23:57:04.597533 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f"} err="failed to get container status \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd68e5056e29ea3cc9d7fc03c7306dab510ed90ddeb80427e411354d8e52d71f\": not found"
May 8 23:57:04.597630 kubelet[2470]: I0508 23:57:04.597615 2470 scope.go:117] "RemoveContainer" containerID="596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d"
May 8 23:57:04.599811 containerd[1439]: time="2025-05-08T23:57:04.599650945Z" level=info msg="RemoveContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\""
May 8 23:57:04.602869 containerd[1439]: time="2025-05-08T23:57:04.602831687Z" level=info msg="RemoveContainer for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" returns successfully"
May 8 23:57:04.603656 kubelet[2470]: I0508 23:57:04.603636 2470 scope.go:117] "RemoveContainer" containerID="c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f"
May 8 23:57:04.608251 containerd[1439]: time="2025-05-08T23:57:04.608197644Z" level=info msg="RemoveContainer for \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\""
May 8 23:57:04.614506 containerd[1439]: time="2025-05-08T23:57:04.614416927Z" level=info msg="RemoveContainer for \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\" returns successfully"
May 8 23:57:04.614913 kubelet[2470]: I0508 23:57:04.614889 2470 scope.go:117] "RemoveContainer" containerID="88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05"
May 8 23:57:04.616968 containerd[1439]: time="2025-05-08T23:57:04.616686423Z" level=info msg="RemoveContainer for \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\""
May 8 23:57:04.622348 containerd[1439]: time="2025-05-08T23:57:04.622257782Z" level=info msg="RemoveContainer for \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\" returns successfully"
May 8 23:57:04.622506 kubelet[2470]: I0508 23:57:04.622457 2470 scope.go:117] "RemoveContainer" containerID="c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835"
May 8 23:57:04.632745 containerd[1439]: time="2025-05-08T23:57:04.632709814Z" level=info msg="RemoveContainer for \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\""
May 8 23:57:04.636005 containerd[1439]: time="2025-05-08T23:57:04.635967397Z" level=info msg="RemoveContainer for \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\" returns successfully"
May 8 23:57:04.636206 kubelet[2470]: I0508 23:57:04.636173 2470 scope.go:117] "RemoveContainer" containerID="6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7"
May 8 23:57:04.637221 containerd[1439]: time="2025-05-08T23:57:04.637198565Z" level=info msg="RemoveContainer for \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\""
May 8 23:57:04.639713 containerd[1439]: time="2025-05-08T23:57:04.639677662Z" level=info msg="RemoveContainer for \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\" returns successfully"
May 8 23:57:04.639871 kubelet[2470]: I0508 23:57:04.639834 2470 scope.go:117] "RemoveContainer" containerID="596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d"
May 8 23:57:04.640046 containerd[1439]: time="2025-05-08T23:57:04.640012305Z" level=error msg="ContainerStatus for \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\": not found"
May 8 23:57:04.640179 kubelet[2470]: E0508 23:57:04.640155 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\": not found" containerID="596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d"
May 8 23:57:04.640212 kubelet[2470]: I0508 23:57:04.640186 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d"} err="failed to get container status \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\": rpc error: code = NotFound desc = an error occurred when try to find container \"596c50f576481f55d9f902d2033564787c4abbf431da3461eef2d6f0f9f8f01d\": not found"
May 8 23:57:04.640212 kubelet[2470]: I0508 23:57:04.640208 2470 scope.go:117] "RemoveContainer" containerID="c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f"
May 8 23:57:04.640423 containerd[1439]: time="2025-05-08T23:57:04.640392347Z" level=error msg="ContainerStatus for \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\": not found"
May 8 23:57:04.640563 kubelet[2470]: E0508 23:57:04.640525 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\": not found" containerID="c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f"
May 8 23:57:04.640563 kubelet[2470]: I0508 23:57:04.640553 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f"} err="failed to get container status \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1e728d603edafb455061adb421fcecdd3c6c9637c4e61d210f207976578254f\": not found"
May 8 23:57:04.640622 kubelet[2470]: I0508 23:57:04.640573 2470 scope.go:117] "RemoveContainer" containerID="88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05"
May 8 23:57:04.640766 containerd[1439]: time="2025-05-08T23:57:04.640735630Z" level=error msg="ContainerStatus for \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\": not found"
May 8 23:57:04.640849 kubelet[2470]: E0508 23:57:04.640832 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\": not found" containerID="88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05"
May 8 23:57:04.640883 kubelet[2470]: I0508 23:57:04.640853 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05"} err="failed to get container status \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\": rpc error: code = NotFound desc = an error occurred when try to find container \"88834ccb3f34f12bfe5a7af583708857c4e33b89ba7abe80e170233fb079ff05\": not found"
May 8 23:57:04.640883 kubelet[2470]: I0508 23:57:04.640867 2470 scope.go:117] "RemoveContainer" containerID="c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835"
May 8 23:57:04.641134 containerd[1439]: time="2025-05-08T23:57:04.641066472Z" level=error msg="ContainerStatus for \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\": not found"
May 8 23:57:04.641210 kubelet[2470]: E0508 23:57:04.641188 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\": not found" containerID="c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835"
May 8 23:57:04.641280 kubelet[2470]: I0508 23:57:04.641230 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835"} err="failed to get container status \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\": rpc error: code = NotFound desc = an error occurred when try to find container \"c38e27699f9b13eb2c565c53acf17ee03b36c712b6099b4da0ff575a6b909835\": not found"
May 8 23:57:04.641324 kubelet[2470]: I0508 23:57:04.641281 2470 scope.go:117] "RemoveContainer" containerID="6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7"
May 8 23:57:04.641477 containerd[1439]: time="2025-05-08T23:57:04.641450915Z" level=error msg="ContainerStatus for \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\": not found"
May 8 23:57:04.641580 kubelet[2470]: E0508 23:57:04.641560 2470 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\": not found" containerID="6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7"
May 8 23:57:04.641619 kubelet[2470]: I0508 23:57:04.641584 2470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7"} err="failed to get container status \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d6afc1ed31a9a7aa02ff7ad995e40349ae867d4c4afd63164947f92b08c99a7\": not found"
May 8 23:57:05.129855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee-rootfs.mount: Deactivated successfully.
May 8 23:57:05.129963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db6271e2846331d1f91d170784d439df28e35640a5e6f54576590bc83edc40ee-shm.mount: Deactivated successfully.
May 8 23:57:05.130018 systemd[1]: var-lib-kubelet-pods-f110fe08\x2d0872\x2d400a\x2db06d\x2deb3d16cf2383-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkg48.mount: Deactivated successfully.
May 8 23:57:05.130070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ced0615b54d15532bd5f1d3bd6374e958a1cf561706bfbd9d31ebd21f75ee40a-rootfs.mount: Deactivated successfully.
May 8 23:57:05.130132 systemd[1]: var-lib-kubelet-pods-cf729b46\x2d665b\x2d4bf4\x2d9cfe\x2dc1814539cf0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d582b9.mount: Deactivated successfully.
May 8 23:57:05.130183 systemd[1]: var-lib-kubelet-pods-f110fe08\x2d0872\x2d400a\x2db06d\x2deb3d16cf2383-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 8 23:57:05.130231 systemd[1]: var-lib-kubelet-pods-f110fe08\x2d0872\x2d400a\x2db06d\x2deb3d16cf2383-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 8 23:57:05.420352 kubelet[2470]: I0508 23:57:05.420299 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf729b46-665b-4bf4-9cfe-c1814539cf0a" path="/var/lib/kubelet/pods/cf729b46-665b-4bf4-9cfe-c1814539cf0a/volumes"
May 8 23:57:05.420750 kubelet[2470]: I0508 23:57:05.420706 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f110fe08-0872-400a-b06d-eb3d16cf2383" path="/var/lib/kubelet/pods/f110fe08-0872-400a-b06d-eb3d16cf2383/volumes"
May 8 23:57:06.060321 sshd[4113]: pam_unix(sshd:session): session closed for user core
May 8 23:57:06.071788 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:50282.service: Deactivated successfully.
May 8 23:57:06.073395 systemd[1]: session-23.scope: Deactivated successfully.
May 8 23:57:06.073576 systemd[1]: session-23.scope: Consumed 1.818s CPU time.
May 8 23:57:06.075010 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit.
May 8 23:57:06.080502 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:60240.service - OpenSSH per-connection server daemon (10.0.0.1:60240).
May 8 23:57:06.081417 systemd-logind[1422]: Removed session 23.
May 8 23:57:06.115251 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 60240 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 8 23:57:06.116720 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:57:06.120560 systemd-logind[1422]: New session 24 of user core.
May 8 23:57:06.130426 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 23:57:06.409023 kubelet[2470]: E0508 23:57:06.408563 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:06.957570 sshd[4273]: pam_unix(sshd:session): session closed for user core
May 8 23:57:06.965154 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:60240.service: Deactivated successfully.
May 8 23:57:06.968058 systemd[1]: session-24.scope: Deactivated successfully.
May 8 23:57:06.970759 kubelet[2470]: I0508 23:57:06.970225 2470 memory_manager.go:355] "RemoveStaleState removing state" podUID="cf729b46-665b-4bf4-9cfe-c1814539cf0a" containerName="cilium-operator"
May 8 23:57:06.970759 kubelet[2470]: I0508 23:57:06.970290 2470 memory_manager.go:355] "RemoveStaleState removing state" podUID="f110fe08-0872-400a-b06d-eb3d16cf2383" containerName="cilium-agent"
May 8 23:57:06.977389 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit.
May 8 23:57:06.979524 kubelet[2470]: I0508 23:57:06.979499 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afcdcf27-6825-4ea3-9546-fdf9e85a2022-cilium-config-path\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979729 kubelet[2470]: I0508 23:57:06.979657 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-hostproc\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979729 kubelet[2470]: I0508 23:57:06.979688 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4p4\" (UniqueName: \"kubernetes.io/projected/afcdcf27-6825-4ea3-9546-fdf9e85a2022-kube-api-access-5j4p4\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979896 kubelet[2470]: I0508 23:57:06.979710 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-cilium-run\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979896 kubelet[2470]: I0508 23:57:06.979829 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-etc-cni-netd\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979896 kubelet[2470]: I0508 23:57:06.979850 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-cilium-cgroup\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.979896 kubelet[2470]: I0508 23:57:06.979865 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-cni-path\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980095 kubelet[2470]: I0508 23:57:06.979984 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-lib-modules\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980095 kubelet[2470]: I0508 23:57:06.980005 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afcdcf27-6825-4ea3-9546-fdf9e85a2022-clustermesh-secrets\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980095 kubelet[2470]: I0508 23:57:06.980021 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-host-proc-sys-net\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980354 kubelet[2470]: I0508 23:57:06.980135 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-host-proc-sys-kernel\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980354 kubelet[2470]: I0508 23:57:06.980163 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/afcdcf27-6825-4ea3-9546-fdf9e85a2022-cilium-ipsec-secrets\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980354 kubelet[2470]: I0508 23:57:06.980180 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-bpf-maps\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980354 kubelet[2470]: I0508 23:57:06.980196 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afcdcf27-6825-4ea3-9546-fdf9e85a2022-xtables-lock\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.980354 kubelet[2470]: I0508 23:57:06.980302 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afcdcf27-6825-4ea3-9546-fdf9e85a2022-hubble-tls\") pod \"cilium-kpwbh\" (UID: \"afcdcf27-6825-4ea3-9546-fdf9e85a2022\") " pod="kube-system/cilium-kpwbh"
May 8 23:57:06.987033 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:60252.service - OpenSSH per-connection server daemon (10.0.0.1:60252).
May 8 23:57:06.990859 systemd-logind[1422]: Removed session 24.
May 8 23:57:06.999179 systemd[1]: Created slice kubepods-burstable-podafcdcf27_6825_4ea3_9546_fdf9e85a2022.slice - libcontainer container kubepods-burstable-podafcdcf27_6825_4ea3_9546_fdf9e85a2022.slice.
May 8 23:57:07.023977 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 60252 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA
May 8 23:57:07.024550 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:57:07.029041 systemd-logind[1422]: New session 25 of user core.
May 8 23:57:07.041460 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 23:57:07.099180 sshd[4287]: pam_unix(sshd:session): session closed for user core May 8 23:57:07.108656 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:60252.service: Deactivated successfully. May 8 23:57:07.111573 systemd[1]: session-25.scope: Deactivated successfully. May 8 23:57:07.112904 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. May 8 23:57:07.114183 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:60254.service - OpenSSH per-connection server daemon (10.0.0.1:60254). May 8 23:57:07.115078 systemd-logind[1422]: Removed session 25. May 8 23:57:07.151403 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 60254 ssh2: RSA SHA256:FYCv7MddxRJ04VoyXdzc4EtAmK38lsK0g0VE7murXbA May 8 23:57:07.152827 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:57:07.156606 systemd-logind[1422]: New session 26 of user core. May 8 23:57:07.163445 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 23:57:07.304677 kubelet[2470]: E0508 23:57:07.304568 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:07.305195 containerd[1439]: time="2025-05-08T23:57:07.305153872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpwbh,Uid:afcdcf27-6825-4ea3-9546-fdf9e85a2022,Namespace:kube-system,Attempt:0,}" May 8 23:57:07.326410 containerd[1439]: time="2025-05-08T23:57:07.326291967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:57:07.326410 containerd[1439]: time="2025-05-08T23:57:07.326350848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:57:07.326410 containerd[1439]: time="2025-05-08T23:57:07.326366048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:57:07.326658 containerd[1439]: time="2025-05-08T23:57:07.326450728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:57:07.355503 systemd[1]: Started cri-containerd-a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f.scope - libcontainer container a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f. 
May 8 23:57:07.375637 containerd[1439]: time="2025-05-08T23:57:07.375596202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpwbh,Uid:afcdcf27-6825-4ea3-9546-fdf9e85a2022,Namespace:kube-system,Attempt:0,} returns sandbox id \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\"" May 8 23:57:07.376494 kubelet[2470]: E0508 23:57:07.376469 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:07.378705 containerd[1439]: time="2025-05-08T23:57:07.378610622Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:57:07.389038 containerd[1439]: time="2025-05-08T23:57:07.388985448Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700\"" May 8 23:57:07.389545 containerd[1439]: time="2025-05-08T23:57:07.389477491Z" level=info msg="StartContainer for \"6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700\"" May 8 23:57:07.411429 systemd[1]: Started cri-containerd-6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700.scope - libcontainer container 6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700. May 8 23:57:07.432476 containerd[1439]: time="2025-05-08T23:57:07.432421886Z" level=info msg="StartContainer for \"6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700\" returns successfully" May 8 23:57:07.443031 systemd[1]: cri-containerd-6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700.scope: Deactivated successfully. 
May 8 23:57:07.473477 containerd[1439]: time="2025-05-08T23:57:07.473344548Z" level=info msg="shim disconnected" id=6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700 namespace=k8s.io May 8 23:57:07.473477 containerd[1439]: time="2025-05-08T23:57:07.473398988Z" level=warning msg="cleaning up after shim disconnected" id=6cd7089c5a74575b6ec954be687ce793378ff9129b17ff93cf40d500942e8700 namespace=k8s.io May 8 23:57:07.473477 containerd[1439]: time="2025-05-08T23:57:07.473408388Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:57:07.602654 kubelet[2470]: E0508 23:57:07.602198 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:07.605035 containerd[1439]: time="2025-05-08T23:57:07.604997830Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:57:07.616254 containerd[1439]: time="2025-05-08T23:57:07.616197021Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9\"" May 8 23:57:07.616834 containerd[1439]: time="2025-05-08T23:57:07.616808945Z" level=info msg="StartContainer for \"bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9\"" May 8 23:57:07.651449 systemd[1]: Started cri-containerd-bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9.scope - libcontainer container bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9. May 8 23:57:07.674356 containerd[1439]: time="2025-05-08T23:57:07.674310193Z" level=info msg="StartContainer for \"bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9\" returns successfully" May 8 23:57:07.681691 systemd[1]: cri-containerd-bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9.scope: Deactivated successfully. 
May 8 23:57:07.704186 containerd[1439]: time="2025-05-08T23:57:07.704120064Z" level=info msg="shim disconnected" id=bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9 namespace=k8s.io May 8 23:57:07.704186 containerd[1439]: time="2025-05-08T23:57:07.704172304Z" level=warning msg="cleaning up after shim disconnected" id=bade6eb907fa89ca457320cc62a2969cc45930c54f250f0fd38e9f00b8f917c9 namespace=k8s.io May 8 23:57:07.704186 containerd[1439]: time="2025-05-08T23:57:07.704181144Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:57:08.408031 kubelet[2470]: E0508 23:57:08.407954 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:08.606692 kubelet[2470]: E0508 23:57:08.605676 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:08.608545 containerd[1439]: time="2025-05-08T23:57:08.608499667Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:57:08.623793 containerd[1439]: time="2025-05-08T23:57:08.623645602Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30\"" May 8 23:57:08.624796 containerd[1439]: time="2025-05-08T23:57:08.624627808Z" level=info msg="StartContainer for \"167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30\"" May 8 23:57:08.659475 systemd[1]: Started cri-containerd-167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30.scope - libcontainer container 167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30. May 8 23:57:08.685318 systemd[1]: cri-containerd-167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30.scope: Deactivated successfully. May 8 23:57:08.686400 containerd[1439]: time="2025-05-08T23:57:08.686183031Z" level=info msg="StartContainer for \"167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30\" returns successfully" May 8 23:57:08.723733 containerd[1439]: time="2025-05-08T23:57:08.723664425Z" level=info msg="shim disconnected" id=167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30 namespace=k8s.io May 8 23:57:08.723733 containerd[1439]: time="2025-05-08T23:57:08.723728865Z" level=warning msg="cleaning up after shim disconnected" id=167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30 namespace=k8s.io May 8 23:57:08.723733 containerd[1439]: time="2025-05-08T23:57:08.723738065Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:57:09.084854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167896cc3e4869b501c04abdaf24ea52d90a6dca49ea586631f12385d6fddb30-rootfs.mount: Deactivated successfully. 
May 8 23:57:09.469579 kubelet[2470]: E0508 23:57:09.469537 2470 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:57:09.614270 kubelet[2470]: E0508 23:57:09.613878 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:57:09.617006 containerd[1439]: time="2025-05-08T23:57:09.616959007Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:57:09.639886 containerd[1439]: time="2025-05-08T23:57:09.639522024Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f\"" May 8 23:57:09.640014 containerd[1439]: time="2025-05-08T23:57:09.639976147Z" level=info msg="StartContainer for \"4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f\"" May 8 23:57:09.670496 systemd[1]: Started cri-containerd-4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f.scope - libcontainer container 4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f. May 8 23:57:09.695616 systemd[1]: cri-containerd-4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f.scope: Deactivated successfully. May 8 23:57:09.705521 containerd[1439]: time="2025-05-08T23:57:09.705156822Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafcdcf27_6825_4ea3_9546_fdf9e85a2022.slice/cri-containerd-4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f.scope/memory.events\": no such file or directory" May 8 23:57:09.714801 containerd[1439]: time="2025-05-08T23:57:09.713212551Z" level=info msg="StartContainer for \"4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f\" returns successfully" May 8 23:57:09.736267 containerd[1439]: time="2025-05-08T23:57:09.736103490Z" level=info msg="shim disconnected" id=4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f namespace=k8s.io May 8 23:57:09.736267 containerd[1439]: time="2025-05-08T23:57:09.736168410Z" level=warning msg="cleaning up after shim disconnected" id=4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f namespace=k8s.io May 8 23:57:09.736267 containerd[1439]: time="2025-05-08T23:57:09.736180331Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:57:10.086033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0036dffa3594924f141bac4ee503a0bc05eff078ed1086cd4b04e1863bdd8f-rootfs.mount: Deactivated successfully. 
May 8 23:57:10.618901 kubelet[2470]: E0508 23:57:10.618866 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:10.622569 containerd[1439]: time="2025-05-08T23:57:10.622342967Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 23:57:10.644260 containerd[1439]: time="2025-05-08T23:57:10.644191536Z" level=info msg="CreateContainer within sandbox \"a440d3902aecd69d81a0d3998e559f62b7fbe59247e4dd47a63aa08be141179f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a\""
May 8 23:57:10.646666 containerd[1439]: time="2025-05-08T23:57:10.646607430Z" level=info msg="StartContainer for \"be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a\""
May 8 23:57:10.674449 systemd[1]: Started cri-containerd-be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a.scope - libcontainer container be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a.
May 8 23:57:10.698847 containerd[1439]: time="2025-05-08T23:57:10.697850893Z" level=info msg="StartContainer for \"be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a\" returns successfully"
May 8 23:57:11.022357 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 8 23:57:11.069019 kubelet[2470]: I0508 23:57:11.068728 2470 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T23:57:11Z","lastTransitionTime":"2025-05-08T23:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 23:57:11.626875 kubelet[2470]: E0508 23:57:11.626835 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:11.649552 kubelet[2470]: I0508 23:57:11.649430 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kpwbh" podStartSLOduration=5.649410615 podStartE2EDuration="5.649410615s" podCreationTimestamp="2025-05-08 23:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:57:11.649349575 +0000 UTC m=+82.328285738" watchObservedRunningTime="2025-05-08 23:57:11.649410615 +0000 UTC m=+82.328346778"
May 8 23:57:13.305201 kubelet[2470]: E0508 23:57:13.305156 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:13.409949 kubelet[2470]: E0508 23:57:13.409894 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:14.004645 systemd-networkd[1374]: lxc_health: Link UP
May 8 23:57:14.022811 systemd-networkd[1374]: lxc_health: Gained carrier
May 8 23:57:15.306694 kubelet[2470]: E0508 23:57:15.306659 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:15.633787 kubelet[2470]: E0508 23:57:15.633672 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:15.958503 systemd-networkd[1374]: lxc_health: Gained IPv6LL
May 8 23:57:16.635538 kubelet[2470]: E0508 23:57:16.635487 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:17.409301 kubelet[2470]: E0508 23:57:17.409230 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:57:19.935750 systemd[1]: run-containerd-runc-k8s.io-be0b5114c2ebd1a6b810ff2e5c31df8fefe326700e00dedf65289c91ded0968a-runc.63UeQX.mount: Deactivated successfully.
May 8 23:57:22.099937 sshd[4299]: pam_unix(sshd:session): session closed for user core
May 8 23:57:22.103122 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit.
May 8 23:57:22.103525 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:60254.service: Deactivated successfully.
May 8 23:57:22.105304 systemd[1]: session-26.scope: Deactivated successfully.
May 8 23:57:22.106211 systemd-logind[1422]: Removed session 26.