May 8 23:51:02.903636 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 23:51:02.903656 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025
May 8 23:51:02.903666 kernel: KASLR enabled
May 8 23:51:02.903671 kernel: efi: EFI v2.7 by EDK II
May 8 23:51:02.903677 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 8 23:51:02.903683 kernel: random: crng init done
May 8 23:51:02.903689 kernel: secureboot: Secure boot disabled
May 8 23:51:02.903695 kernel: ACPI: Early table checksum verification disabled
May 8 23:51:02.903701 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 8 23:51:02.903708 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 23:51:02.903714 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903720 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903726 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903732 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903739 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903746 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903753 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903759 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903765 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:51:02.903799 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 23:51:02.903806 kernel: NUMA: Failed to initialise from firmware
May 8 23:51:02.903812 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:51:02.903818 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 23:51:02.903824 kernel: Zone ranges:
May 8 23:51:02.903831 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:51:02.903839 kernel: DMA32 empty
May 8 23:51:02.903845 kernel: Normal empty
May 8 23:51:02.903851 kernel: Movable zone start for each node
May 8 23:51:02.903857 kernel: Early memory node ranges
May 8 23:51:02.903863 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 23:51:02.903869 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 23:51:02.903881 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 23:51:02.903887 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 23:51:02.903893 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 23:51:02.903899 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 23:51:02.903905 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 23:51:02.903912 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:51:02.903919 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 23:51:02.903925 kernel: psci: probing for conduit method from ACPI.
May 8 23:51:02.903931 kernel: psci: PSCIv1.1 detected in firmware.
May 8 23:51:02.903940 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 23:51:02.903946 kernel: psci: Trusted OS migration not required
May 8 23:51:02.903953 kernel: psci: SMC Calling Convention v1.1
May 8 23:51:02.903961 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 23:51:02.903968 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 8 23:51:02.903974 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 8 23:51:02.903981 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 23:51:02.903987 kernel: Detected PIPT I-cache on CPU0
May 8 23:51:02.903994 kernel: CPU features: detected: GIC system register CPU interface
May 8 23:51:02.904001 kernel: CPU features: detected: Hardware dirty bit management
May 8 23:51:02.904007 kernel: CPU features: detected: Spectre-v4
May 8 23:51:02.904014 kernel: CPU features: detected: Spectre-BHB
May 8 23:51:02.904021 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 23:51:02.904028 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 23:51:02.904035 kernel: CPU features: detected: ARM erratum 1418040
May 8 23:51:02.904041 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 23:51:02.904048 kernel: alternatives: applying boot alternatives
May 8 23:51:02.904056 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:51:02.904063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 23:51:02.904069 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 23:51:02.904076 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 23:51:02.904082 kernel: Fallback order for Node 0: 0
May 8 23:51:02.904089 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 23:51:02.904096 kernel: Policy zone: DMA
May 8 23:51:02.904104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 23:51:02.904110 kernel: software IO TLB: area num 4.
May 8 23:51:02.904117 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 23:51:02.904124 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 8 23:51:02.904130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 23:51:02.904137 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 23:51:02.904144 kernel: rcu: RCU event tracing is enabled.
May 8 23:51:02.904151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 23:51:02.904157 kernel: Trampoline variant of Tasks RCU enabled.
May 8 23:51:02.904164 kernel: Tracing variant of Tasks RCU enabled.
May 8 23:51:02.904170 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 23:51:02.904177 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 23:51:02.904185 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 23:51:02.904191 kernel: GICv3: 256 SPIs implemented
May 8 23:51:02.904198 kernel: GICv3: 0 Extended SPIs implemented
May 8 23:51:02.904204 kernel: Root IRQ handler: gic_handle_irq
May 8 23:51:02.904211 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 23:51:02.904217 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 23:51:02.904224 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 23:51:02.904230 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 23:51:02.904237 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 23:51:02.904244 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 23:51:02.904251 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 23:51:02.904258 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 23:51:02.904265 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:51:02.904271 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 23:51:02.904278 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 23:51:02.904285 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 23:51:02.904292 kernel: arm-pv: using stolen time PV
May 8 23:51:02.904299 kernel: Console: colour dummy device 80x25
May 8 23:51:02.904305 kernel: ACPI: Core revision 20230628
May 8 23:51:02.904312 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 23:51:02.904319 kernel: pid_max: default: 32768 minimum: 301
May 8 23:51:02.904327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 23:51:02.904333 kernel: landlock: Up and running.
May 8 23:51:02.904340 kernel: SELinux: Initializing.
May 8 23:51:02.904347 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:51:02.904353 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:51:02.904360 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 8 23:51:02.904367 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:51:02.904374 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:51:02.904381 kernel: rcu: Hierarchical SRCU implementation.
May 8 23:51:02.904389 kernel: rcu: Max phase no-delay instances is 400.
May 8 23:51:02.904395 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 23:51:02.904402 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 23:51:02.904409 kernel: Remapping and enabling EFI services.
May 8 23:51:02.904416 kernel: smp: Bringing up secondary CPUs ...
May 8 23:51:02.904422 kernel: Detected PIPT I-cache on CPU1
May 8 23:51:02.904429 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 23:51:02.904436 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 23:51:02.904443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:51:02.904449 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 23:51:02.904458 kernel: Detected PIPT I-cache on CPU2
May 8 23:51:02.904465 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 23:51:02.904476 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 23:51:02.904484 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:51:02.904491 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 23:51:02.904498 kernel: Detected PIPT I-cache on CPU3
May 8 23:51:02.904506 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 23:51:02.904513 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 23:51:02.904520 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:51:02.904527 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 23:51:02.904535 kernel: smp: Brought up 1 node, 4 CPUs
May 8 23:51:02.904542 kernel: SMP: Total of 4 processors activated.
May 8 23:51:02.904549 kernel: CPU features: detected: 32-bit EL0 Support
May 8 23:51:02.904556 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 23:51:02.904563 kernel: CPU features: detected: Common not Private translations
May 8 23:51:02.904570 kernel: CPU features: detected: CRC32 instructions
May 8 23:51:02.904577 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 23:51:02.904586 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 23:51:02.904593 kernel: CPU features: detected: LSE atomic instructions
May 8 23:51:02.904600 kernel: CPU features: detected: Privileged Access Never
May 8 23:51:02.904607 kernel: CPU features: detected: RAS Extension Support
May 8 23:51:02.904614 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 23:51:02.904621 kernel: CPU: All CPU(s) started at EL1
May 8 23:51:02.904628 kernel: alternatives: applying system-wide alternatives
May 8 23:51:02.904635 kernel: devtmpfs: initialized
May 8 23:51:02.904643 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 23:51:02.904651 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 23:51:02.904658 kernel: pinctrl core: initialized pinctrl subsystem
May 8 23:51:02.904665 kernel: SMBIOS 3.0.0 present.
May 8 23:51:02.904672 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 8 23:51:02.904679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 23:51:02.904687 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 23:51:02.904694 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 23:51:02.904701 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 23:51:02.904708 kernel: audit: initializing netlink subsys (disabled)
May 8 23:51:02.904717 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
May 8 23:51:02.904724 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 23:51:02.904731 kernel: cpuidle: using governor menu
May 8 23:51:02.904738 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 23:51:02.904745 kernel: ASID allocator initialised with 32768 entries
May 8 23:51:02.904753 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 23:51:02.904760 kernel: Serial: AMBA PL011 UART driver
May 8 23:51:02.904771 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 23:51:02.904783 kernel: Modules: 0 pages in range for non-PLT usage
May 8 23:51:02.904792 kernel: Modules: 508944 pages in range for PLT usage
May 8 23:51:02.904799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 23:51:02.904806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 23:51:02.904813 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 23:51:02.904821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 23:51:02.904828 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 23:51:02.904835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 23:51:02.904842 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 23:51:02.904850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 23:51:02.904858 kernel: ACPI: Added _OSI(Module Device)
May 8 23:51:02.904878 kernel: ACPI: Added _OSI(Processor Device)
May 8 23:51:02.904886 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 23:51:02.904893 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 23:51:02.904900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 23:51:02.904907 kernel: ACPI: Interpreter enabled
May 8 23:51:02.904915 kernel: ACPI: Using GIC for interrupt routing
May 8 23:51:02.904922 kernel: ACPI: MCFG table detected, 1 entries
May 8 23:51:02.904929 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 23:51:02.904937 kernel: printk: console [ttyAMA0] enabled
May 8 23:51:02.904944 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 23:51:02.905083 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 23:51:02.905157 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 23:51:02.905225 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 23:51:02.905289 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 23:51:02.905353 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 23:51:02.905365 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 23:51:02.905372 kernel: PCI host bridge to bus 0000:00
May 8 23:51:02.905443 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 23:51:02.905505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 23:51:02.905565 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 23:51:02.905626 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 23:51:02.905713 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 23:51:02.905826 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 23:51:02.905899 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 23:51:02.905966 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 23:51:02.906033 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:51:02.906100 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:51:02.906168 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 23:51:02.906253 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 23:51:02.906320 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 23:51:02.906380 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 23:51:02.906440 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 23:51:02.906450 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 23:51:02.906457 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 23:51:02.906464 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 23:51:02.906471 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 23:51:02.906478 kernel: iommu: Default domain type: Translated
May 8 23:51:02.906488 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 23:51:02.906495 kernel: efivars: Registered efivars operations
May 8 23:51:02.906502 kernel: vgaarb: loaded
May 8 23:51:02.906509 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 23:51:02.906516 kernel: VFS: Disk quotas dquot_6.6.0
May 8 23:51:02.906523 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 23:51:02.906530 kernel: pnp: PnP ACPI init
May 8 23:51:02.906601 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 23:51:02.906613 kernel: pnp: PnP ACPI: found 1 devices
May 8 23:51:02.906620 kernel: NET: Registered PF_INET protocol family
May 8 23:51:02.906628 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 23:51:02.906635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 23:51:02.906643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 23:51:02.906650 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 23:51:02.906657 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 23:51:02.906665 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 23:51:02.906672 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:51:02.906680 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:51:02.906687 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 23:51:02.906694 kernel: PCI: CLS 0 bytes, default 64
May 8 23:51:02.906702 kernel: kvm [1]: HYP mode not available
May 8 23:51:02.906709 kernel: Initialise system trusted keyrings
May 8 23:51:02.906716 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 23:51:02.906724 kernel: Key type asymmetric registered
May 8 23:51:02.906731 kernel: Asymmetric key parser 'x509' registered
May 8 23:51:02.906738 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 23:51:02.906746 kernel: io scheduler mq-deadline registered
May 8 23:51:02.906753 kernel: io scheduler kyber registered
May 8 23:51:02.906760 kernel: io scheduler bfq registered
May 8 23:51:02.906775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 23:51:02.906796 kernel: ACPI: button: Power Button [PWRB]
May 8 23:51:02.906803 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 23:51:02.906877 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 23:51:02.906886 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 23:51:02.906894 kernel: thunder_xcv, ver 1.0
May 8 23:51:02.906903 kernel: thunder_bgx, ver 1.0
May 8 23:51:02.906910 kernel: nicpf, ver 1.0
May 8 23:51:02.906917 kernel: nicvf, ver 1.0
May 8 23:51:02.906995 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 23:51:02.907060 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:51:02 UTC (1746748262)
May 8 23:51:02.907069 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 23:51:02.907077 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 23:51:02.907084 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 23:51:02.907094 kernel: watchdog: Hard watchdog permanently disabled
May 8 23:51:02.907101 kernel: NET: Registered PF_INET6 protocol family
May 8 23:51:02.907108 kernel: Segment Routing with IPv6
May 8 23:51:02.907115 kernel: In-situ OAM (IOAM) with IPv6
May 8 23:51:02.907122 kernel: NET: Registered PF_PACKET protocol family
May 8 23:51:02.907130 kernel: Key type dns_resolver registered
May 8 23:51:02.907137 kernel: registered taskstats version 1
May 8 23:51:02.907144 kernel: Loading compiled-in X.509 certificates
May 8 23:51:02.907151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b'
May 8 23:51:02.907159 kernel: Key type .fscrypt registered
May 8 23:51:02.907166 kernel: Key type fscrypt-provisioning registered
May 8 23:51:02.907174 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 23:51:02.907181 kernel: ima: Allocated hash algorithm: sha1
May 8 23:51:02.907188 kernel: ima: No architecture policies found
May 8 23:51:02.907195 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 23:51:02.907202 kernel: clk: Disabling unused clocks
May 8 23:51:02.907209 kernel: Freeing unused kernel memory: 39744K
May 8 23:51:02.907216 kernel: Run /init as init process
May 8 23:51:02.907225 kernel: with arguments:
May 8 23:51:02.907232 kernel: /init
May 8 23:51:02.907238 kernel: with environment:
May 8 23:51:02.907245 kernel: HOME=/
May 8 23:51:02.907252 kernel: TERM=linux
May 8 23:51:02.907259 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 23:51:02.907268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:51:02.907277 systemd[1]: Detected virtualization kvm.
May 8 23:51:02.907286 systemd[1]: Detected architecture arm64.
May 8 23:51:02.907293 systemd[1]: Running in initrd.
May 8 23:51:02.907301 systemd[1]: No hostname configured, using default hostname.
May 8 23:51:02.907308 systemd[1]: Hostname set to .
May 8 23:51:02.907316 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:51:02.907323 systemd[1]: Queued start job for default target initrd.target.
May 8 23:51:02.907331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:51:02.907338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:51:02.907348 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 23:51:02.907355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:51:02.907363 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 23:51:02.907371 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 23:51:02.907380 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 23:51:02.907388 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 23:51:02.907397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:51:02.907404 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:51:02.907412 systemd[1]: Reached target paths.target - Path Units.
May 8 23:51:02.907420 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:51:02.907427 systemd[1]: Reached target swap.target - Swaps.
May 8 23:51:02.907435 systemd[1]: Reached target timers.target - Timer Units.
May 8 23:51:02.907442 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 23:51:02.907450 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 23:51:02.907470 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 23:51:02.907479 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 23:51:02.907486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:51:02.907494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:51:02.907502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:51:02.907509 systemd[1]: Reached target sockets.target - Socket Units.
May 8 23:51:02.907517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 23:51:02.907524 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:51:02.907532 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 23:51:02.907539 systemd[1]: Starting systemd-fsck-usr.service...
May 8 23:51:02.907549 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:51:02.907556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:51:02.907564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:51:02.907571 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 23:51:02.907579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:51:02.907586 systemd[1]: Finished systemd-fsck-usr.service.
May 8 23:51:02.907596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:51:02.907618 systemd-journald[240]: Collecting audit messages is disabled.
May 8 23:51:02.907637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:51:02.907645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:51:02.907654 systemd-journald[240]: Journal started
May 8 23:51:02.907671 systemd-journald[240]: Runtime Journal (/run/log/journal/facf6c3e1d0545e5965f82abaecfa3b2) is 5.9M, max 47.3M, 41.4M free.
May 8 23:51:02.900230 systemd-modules-load[241]: Inserted module 'overlay'
May 8 23:51:02.910795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:51:02.912453 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:51:02.914899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 23:51:02.914749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:51:02.918921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:51:02.922310 kernel: Bridge firewalling registered
May 8 23:51:02.920284 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 8 23:51:02.921346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:51:02.926713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:51:02.927646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:51:02.930821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:51:02.931914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:51:02.934550 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 23:51:02.936916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:51:02.939067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:51:02.948470 dracut-cmdline[273]: dracut-dracut-053
May 8 23:51:02.950837 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:51:02.964829 systemd-resolved[278]: Positive Trust Anchors:
May 8 23:51:02.964902 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:51:02.964933 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:51:02.969526 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 8 23:51:02.970446 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:51:02.971657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:51:03.016823 kernel: SCSI subsystem initialized
May 8 23:51:03.021800 kernel: Loading iSCSI transport class v2.0-870.
May 8 23:51:03.028807 kernel: iscsi: registered transport (tcp)
May 8 23:51:03.041092 kernel: iscsi: registered transport (qla4xxx)
May 8 23:51:03.041111 kernel: QLogic iSCSI HBA Driver
May 8 23:51:03.081825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 23:51:03.092993 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 23:51:03.111022 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 23:51:03.111066 kernel: device-mapper: uevent: version 1.0.3
May 8 23:51:03.112803 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 23:51:03.156804 kernel: raid6: neonx8 gen() 15700 MB/s
May 8 23:51:03.173793 kernel: raid6: neonx4 gen() 15607 MB/s
May 8 23:51:03.190792 kernel: raid6: neonx2 gen() 13142 MB/s
May 8 23:51:03.207791 kernel: raid6: neonx1 gen() 10431 MB/s
May 8 23:51:03.224791 kernel: raid6: int64x8 gen() 6925 MB/s
May 8 23:51:03.241792 kernel: raid6: int64x4 gen() 7327 MB/s
May 8 23:51:03.258797 kernel: raid6: int64x2 gen() 6101 MB/s
May 8 23:51:03.275800 kernel: raid6: int64x1 gen() 5037 MB/s
May 8 23:51:03.275829 kernel: raid6: using algorithm neonx8 gen() 15700 MB/s
May 8 23:51:03.292802 kernel: raid6: .... xor() 11865 MB/s, rmw enabled
May 8 23:51:03.292814 kernel: raid6: using neon recovery algorithm
May 8 23:51:03.297793 kernel: xor: measuring software checksum speed
May 8 23:51:03.297807 kernel: 8regs : 19793 MB/sec
May 8 23:51:03.299105 kernel: 32regs : 18170 MB/sec
May 8 23:51:03.299117 kernel: arm64_neon : 27034 MB/sec
May 8 23:51:03.299126 kernel: xor: using function: arm64_neon (27034 MB/sec)
May 8 23:51:03.348809 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 23:51:03.358992 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:51:03.367940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:51:03.381596 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 8 23:51:03.385314 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:51:03.388980 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 23:51:03.400967 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
May 8 23:51:03.425299 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 23:51:03.438904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:51:03.475384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:51:03.484914 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 23:51:03.496630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 23:51:03.499417 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:51:03.501193 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:51:03.502114 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:51:03.510917 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 23:51:03.519525 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 23:51:03.522502 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 8 23:51:03.523356 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 23:51:03.525022 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 23:51:03.525086 kernel: GPT:9289727 != 19775487 May 8 23:51:03.525098 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 23:51:03.525107 kernel: GPT:9289727 != 19775487 May 8 23:51:03.526217 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 23:51:03.526248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:51:03.535744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:51:03.536078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:51:03.541098 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 8 23:51:03.544189 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520) May 8 23:51:03.542026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:51:03.547100 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (512) May 8 23:51:03.542162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:51:03.543324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:51:03.552008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:51:03.563383 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 23:51:03.564439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:51:03.569484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 23:51:03.573648 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 23:51:03.579726 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 23:51:03.580657 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 23:51:03.593961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 23:51:03.595861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:51:03.599504 disk-uuid[550]: Primary Header is updated. May 8 23:51:03.599504 disk-uuid[550]: Secondary Entries is updated. May 8 23:51:03.599504 disk-uuid[550]: Secondary Header is updated. 
May 8 23:51:03.603799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:51:03.615979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:51:04.616800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 23:51:04.617774 disk-uuid[551]: The operation has completed successfully. May 8 23:51:04.636943 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 23:51:04.637037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 23:51:04.656934 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 23:51:04.662232 sh[571]: Success May 8 23:51:04.676718 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 23:51:04.721182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 23:51:04.722724 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 23:51:04.723530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 23:51:04.734335 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 May 8 23:51:04.734366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 23:51:04.734377 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 23:51:04.734387 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 23:51:04.735787 kernel: BTRFS info (device dm-0): using free space tree May 8 23:51:04.738497 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 23:51:04.739612 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 23:51:04.749986 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 23:51:04.751309 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 8 23:51:04.758046 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:51:04.758075 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:51:04.758091 kernel: BTRFS info (device vda6): using free space tree May 8 23:51:04.760843 kernel: BTRFS info (device vda6): auto enabling async discard May 8 23:51:04.766992 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 23:51:04.768873 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:51:04.774648 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 23:51:04.785162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 23:51:04.852994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:51:04.862947 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:51:04.882324 ignition[658]: Ignition 2.20.0 May 8 23:51:04.882335 ignition[658]: Stage: fetch-offline May 8 23:51:04.882369 ignition[658]: no configs at "/usr/lib/ignition/base.d" May 8 23:51:04.882377 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:04.882583 ignition[658]: parsed url from cmdline: "" May 8 23:51:04.882587 ignition[658]: no config URL provided May 8 23:51:04.882591 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:51:04.882598 ignition[658]: no config at "/usr/lib/ignition/user.ign" May 8 23:51:04.882623 ignition[658]: op(1): [started] loading QEMU firmware config module May 8 23:51:04.882627 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 23:51:04.887791 ignition[658]: op(1): [finished] loading QEMU firmware config module May 8 23:51:04.892116 systemd-networkd[763]: lo: Link UP May 8 23:51:04.892128 systemd-networkd[763]: lo: Gained carrier
May 8 23:51:04.892848 systemd-networkd[763]: Enumeration completed May 8 23:51:04.892941 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:51:04.893230 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:51:04.893233 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:51:04.893896 systemd-networkd[763]: eth0: Link UP May 8 23:51:04.893899 systemd-networkd[763]: eth0: Gained carrier May 8 23:51:04.893905 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:51:04.894984 systemd[1]: Reached target network.target - Network. May 8 23:51:04.909817 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 23:51:04.933781 ignition[658]: parsing config with SHA512: 3e56fd250602a9b349da8e5e2953076f8f8ccb572fef99d6122bf86729246413c62ba8680435872a875a7198dd4f2c5babba38370d36413a0948e1065544f115 May 8 23:51:04.938580 unknown[658]: fetched base config from "system" May 8 23:51:04.938588 unknown[658]: fetched user config from "qemu" May 8 23:51:04.940087 ignition[658]: fetch-offline: fetch-offline passed May 8 23:51:04.940184 ignition[658]: Ignition finished successfully May 8 23:51:04.941538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:51:04.942904 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 23:51:04.949943 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 23:51:04.959850 ignition[770]: Ignition 2.20.0 May 8 23:51:04.959859 ignition[770]: Stage: kargs May 8 23:51:04.960002 ignition[770]: no configs at "/usr/lib/ignition/base.d" May 8 23:51:04.960011 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:04.960929 ignition[770]: kargs: kargs passed May 8 23:51:04.960973 ignition[770]: Ignition finished successfully May 8 23:51:04.963134 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 23:51:04.964675 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 23:51:04.977168 ignition[778]: Ignition 2.20.0 May 8 23:51:04.977177 ignition[778]: Stage: disks May 8 23:51:04.977321 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 8 23:51:04.977330 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:04.978252 ignition[778]: disks: disks passed May 8 23:51:04.979638 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 23:51:04.978293 ignition[778]: Ignition finished successfully May 8 23:51:04.981977 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 23:51:04.982832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 23:51:04.984502 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:51:04.985871 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:51:04.987480 systemd[1]: Reached target basic.target - Basic System. May 8 23:51:05.001914 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 23:51:05.011569 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 23:51:05.014690 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 23:51:05.023932 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 8 23:51:05.063800 kernel: EXT4-fs (vda9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none. May 8 23:51:05.064249 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 23:51:05.065282 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 23:51:05.072869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:51:05.074299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 23:51:05.075264 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 23:51:05.075341 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 23:51:05.075404 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:51:05.081278 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) May 8 23:51:05.080732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 23:51:05.082516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 23:51:05.085993 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:51:05.086009 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:51:05.086025 kernel: BTRFS info (device vda6): using free space tree May 8 23:51:05.087797 kernel: BTRFS info (device vda6): auto enabling async discard May 8 23:51:05.088666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 23:51:05.120602 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 8 23:51:05.123721 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 8 23:51:05.127623 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 8 23:51:05.131131 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 8 23:51:05.195843 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 23:51:05.204865 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 23:51:05.206165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 23:51:05.210815 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:51:05.224920 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 23:51:05.226313 ignition[910]: INFO : Ignition 2.20.0 May 8 23:51:05.226313 ignition[910]: INFO : Stage: mount May 8 23:51:05.226313 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:51:05.226313 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:05.228921 ignition[910]: INFO : mount: mount passed May 8 23:51:05.228921 ignition[910]: INFO : Ignition finished successfully May 8 23:51:05.229081 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 23:51:05.234858 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 23:51:05.732964 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 23:51:05.742939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 8 23:51:05.747791 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) May 8 23:51:05.749366 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:51:05.749391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 23:51:05.749870 kernel: BTRFS info (device vda6): using free space tree May 8 23:51:05.751792 kernel: BTRFS info (device vda6): auto enabling async discard May 8 23:51:05.752728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 23:51:05.767831 ignition[942]: INFO : Ignition 2.20.0 May 8 23:51:05.767831 ignition[942]: INFO : Stage: files May 8 23:51:05.768983 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:51:05.768983 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:05.768983 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 8 23:51:05.771400 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 23:51:05.771400 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 23:51:05.771400 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 23:51:05.771400 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 23:51:05.775313 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 23:51:05.775313 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 8 23:51:05.775313 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 23:51:05.775313 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:51:05.775313 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 8 23:51:05.771540 unknown[942]: wrote ssh authorized keys file for user: core May 8 23:51:06.018677 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 23:51:06.557974 systemd-networkd[763]: eth0: Gained IPv6LL May 8 23:51:06.561152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 8 23:51:06.562705 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:51:06.562705 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 8 23:51:06.861602 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 8 23:51:06.929875 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:51:06.929875 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 8 23:51:06.932412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 8 23:51:07.224726 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK May 8 23:51:08.339362 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 23:51:08.339362 ignition[942]: INFO : files: op(d): [started] processing unit "containerd.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 23:51:08.342230 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 23:51:08.342230 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" May 8 23:51:08.342230 ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 8 23:51:08.362466 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:51:08.366080 ignition[942]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 23:51:08.367222 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 8 23:51:08.367222 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" May 8 23:51:08.367222 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" May 8 23:51:08.367222 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 23:51:08.367222 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 23:51:08.367222 ignition[942]: INFO : files: files passed May 8 23:51:08.367222 ignition[942]: INFO : Ignition finished successfully May 8 23:51:08.368857 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 23:51:08.383967 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 23:51:08.385454 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 23:51:08.387549 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 23:51:08.387631 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 23:51:08.393941 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory May 8 23:51:08.397021 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:51:08.397021 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 23:51:08.399318 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:51:08.399558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:51:08.401842 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 23:51:08.409120 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 23:51:08.428658 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 23:51:08.428812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 23:51:08.430585 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:51:08.431906 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:51:08.433280 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:51:08.434055 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:51:08.449824 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:51:08.463970 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:51:08.471268 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:51:08.472179 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:51:08.473650 systemd[1]: Stopped target timers.target - Timer Units. 
May 8 23:51:08.474936 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:51:08.475048 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:51:08.476862 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:51:08.478261 systemd[1]: Stopped target basic.target - Basic System. May 8 23:51:08.479509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:51:08.480711 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:51:08.482119 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:51:08.483536 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:51:08.484850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:51:08.486437 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:51:08.487831 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:51:08.489262 systemd[1]: Stopped target swap.target - Swaps. May 8 23:51:08.490370 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:51:08.490479 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:51:08.492231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:51:08.493577 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:51:08.494972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:51:08.496393 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:51:08.497310 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:51:08.497414 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 23:51:08.499505 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 8 23:51:08.499616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:51:08.501035 systemd[1]: Stopped target paths.target - Path Units. May 8 23:51:08.502156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:51:08.502835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:51:08.503740 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:51:08.504850 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:51:08.506129 systemd[1]: iscsid.socket: Deactivated successfully. May 8 23:51:08.506211 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:51:08.507718 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 23:51:08.507816 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:51:08.508951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:51:08.509049 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:51:08.510290 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:51:08.510383 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:51:08.522941 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:51:08.524241 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:51:08.524872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:51:08.524980 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:51:08.526299 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:51:08.526388 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:51:08.531598 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 8 23:51:08.531692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:51:08.534167 ignition[998]: INFO : Ignition 2.20.0 May 8 23:51:08.534167 ignition[998]: INFO : Stage: umount May 8 23:51:08.536070 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:51:08.536070 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:51:08.536070 ignition[998]: INFO : umount: umount passed May 8 23:51:08.536070 ignition[998]: INFO : Ignition finished successfully May 8 23:51:08.536534 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:51:08.536626 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:51:08.537827 systemd[1]: Stopped target network.target - Network. May 8 23:51:08.539082 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:51:08.539129 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:51:08.540320 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:51:08.540360 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:51:08.541585 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:51:08.541680 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:51:08.543062 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:51:08.543106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:51:08.544585 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:51:08.545736 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 23:51:08.547979 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:51:08.549140 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:51:08.549258 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 8 23:51:08.551560 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:51:08.551623 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:51:08.552832 systemd-networkd[763]: eth0: DHCPv6 lease lost May 8 23:51:08.554290 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 23:51:08.554467 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:51:08.555693 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:51:08.555723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:51:08.564920 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:51:08.565571 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:51:08.565627 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:51:08.567143 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:51:08.567184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:51:08.568515 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:51:08.568556 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:51:08.570194 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:51:08.579002 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:51:08.579130 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:51:08.580647 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:51:08.580801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:51:08.582376 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:51:08.582435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 8 23:51:08.584055 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 23:51:08.584085 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:51:08.585332 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 23:51:08.585369 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:51:08.587272 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 23:51:08.587310 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 23:51:08.589433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 23:51:08.589480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:51:08.601993 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 23:51:08.602827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 23:51:08.602888 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:51:08.604518 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 23:51:08.604559 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:51:08.606057 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 23:51:08.606096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:51:08.607774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 23:51:08.607824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:51:08.609710 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 23:51:08.609812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 23:51:08.614219 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 23:51:08.614306 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 23:51:08.615611 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 23:51:08.616792 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 23:51:08.616844 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 23:51:08.634920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 23:51:08.640828 systemd[1]: Switching root.
May 8 23:51:08.674311 systemd-journald[240]: Journal stopped
May 8 23:51:09.395876 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
May 8 23:51:09.395932 kernel: SELinux: policy capability network_peer_controls=1
May 8 23:51:09.395943 kernel: SELinux: policy capability open_perms=1
May 8 23:51:09.395953 kernel: SELinux: policy capability extended_socket_class=1
May 8 23:51:09.395966 kernel: SELinux: policy capability always_check_network=0
May 8 23:51:09.395975 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 23:51:09.395985 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 23:51:09.395998 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 23:51:09.396007 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 23:51:09.396018 kernel: audit: type=1403 audit(1746748268.863:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 23:51:09.396028 systemd[1]: Successfully loaded SELinux policy in 29.999ms.
May 8 23:51:09.396051 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.195ms.
May 8 23:51:09.396064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:51:09.396076 systemd[1]: Detected virtualization kvm.
May 8 23:51:09.396086 systemd[1]: Detected architecture arm64.
May 8 23:51:09.396096 systemd[1]: Detected first boot.
May 8 23:51:09.396106 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:51:09.396117 zram_generator::config[1065]: No configuration found.
May 8 23:51:09.396128 systemd[1]: Populated /etc with preset unit settings.
May 8 23:51:09.396138 systemd[1]: Queued start job for default target multi-user.target.
May 8 23:51:09.396148 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 23:51:09.396161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 23:51:09.396173 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 23:51:09.396183 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 23:51:09.396198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 23:51:09.396208 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 23:51:09.396218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 23:51:09.396229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 23:51:09.396240 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 23:51:09.396251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:51:09.396262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:51:09.396272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 23:51:09.396282 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 23:51:09.396293 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 23:51:09.396304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:51:09.396314 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 8 23:51:09.396324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:51:09.396334 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 23:51:09.396346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 23:51:09.396357 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 23:51:09.396368 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:51:09.396378 systemd[1]: Reached target swap.target - Swaps.
May 8 23:51:09.396388 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 23:51:09.396399 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 23:51:09.396410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 23:51:09.396421 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 23:51:09.396432 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:51:09.396443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:51:09.396466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:51:09.396476 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 23:51:09.396487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 23:51:09.396498 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 23:51:09.396508 systemd[1]: Mounting media.mount - External Media Directory...
May 8 23:51:09.396518 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 23:51:09.396528 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 23:51:09.396540 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 23:51:09.396551 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 23:51:09.396561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:51:09.396572 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:51:09.396583 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 23:51:09.396593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:51:09.396604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 23:51:09.396614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:51:09.396625 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 23:51:09.396637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:51:09.396648 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 23:51:09.396658 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 8 23:51:09.396669 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 8 23:51:09.396678 kernel: loop: module loaded
May 8 23:51:09.396688 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:51:09.396698 kernel: fuse: init (API version 7.39)
May 8 23:51:09.396707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:51:09.396717 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 23:51:09.396728 kernel: ACPI: bus type drm_connector registered
May 8 23:51:09.396738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 23:51:09.396772 systemd-journald[1143]: Collecting audit messages is disabled.
May 8 23:51:09.396802 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 23:51:09.396813 systemd-journald[1143]: Journal started
May 8 23:51:09.396836 systemd-journald[1143]: Runtime Journal (/run/log/journal/facf6c3e1d0545e5965f82abaecfa3b2) is 5.9M, max 47.3M, 41.4M free.
May 8 23:51:09.401496 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:51:09.402308 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 23:51:09.403470 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 23:51:09.404661 systemd[1]: Mounted media.mount - External Media Directory.
May 8 23:51:09.405713 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 23:51:09.406946 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 23:51:09.408121 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 23:51:09.409384 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 23:51:09.410904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:51:09.412310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 23:51:09.412470 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 23:51:09.413871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:51:09.414025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:51:09.415314 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 23:51:09.415466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 23:51:09.416713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:51:09.416916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:51:09.418306 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 23:51:09.418456 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 23:51:09.419948 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:51:09.420158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:51:09.421541 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:51:09.423434 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 23:51:09.425156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 23:51:09.436579 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 23:51:09.445929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 23:51:09.448904 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 23:51:09.450019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 23:51:09.452680 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 23:51:09.454942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 23:51:09.456140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 23:51:09.457155 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 23:51:09.458213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 23:51:09.461927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:51:09.466379 systemd-journald[1143]: Time spent on flushing to /var/log/journal/facf6c3e1d0545e5965f82abaecfa3b2 is 16.065ms for 850 entries.
May 8 23:51:09.466379 systemd-journald[1143]: System Journal (/var/log/journal/facf6c3e1d0545e5965f82abaecfa3b2) is 8.0M, max 195.6M, 187.6M free.
May 8 23:51:09.493074 systemd-journald[1143]: Received client request to flush runtime journal.
May 8 23:51:09.466985 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:51:09.469917 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 23:51:09.471224 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 23:51:09.472520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 23:51:09.479978 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 23:51:09.481480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 23:51:09.483283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 23:51:09.492836 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 23:51:09.493221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:51:09.494754 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 23:51:09.503036 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 8 23:51:09.503054 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 8 23:51:09.506934 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:51:09.516000 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 23:51:09.543661 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 23:51:09.557929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:51:09.569908 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 8 23:51:09.569927 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 8 23:51:09.574487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:51:09.899694 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 23:51:09.908080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:51:09.927968 systemd-udevd[1224]: Using default interface naming scheme 'v255'.
May 8 23:51:09.944484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:51:09.955673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 23:51:09.963060 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 23:51:09.964764 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 8 23:51:09.986799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1233)
May 8 23:51:10.025960 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 23:51:10.038438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 23:51:10.072030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:51:10.081633 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 23:51:10.087056 systemd-networkd[1231]: lo: Link UP
May 8 23:51:10.087063 systemd-networkd[1231]: lo: Gained carrier
May 8 23:51:10.087876 systemd-networkd[1231]: Enumeration completed
May 8 23:51:10.088303 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:51:10.088307 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 23:51:10.089052 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 23:51:10.089507 systemd-networkd[1231]: eth0: Link UP
May 8 23:51:10.089516 systemd-networkd[1231]: eth0: Gained carrier
May 8 23:51:10.089528 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:51:10.090296 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 23:51:10.093568 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 23:51:10.101857 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 23:51:10.102812 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 23:51:10.124381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:51:10.138200 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 23:51:10.139610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:51:10.156011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 23:51:10.159372 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 23:51:10.186199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 23:51:10.187636 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 23:51:10.188905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 23:51:10.188941 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 23:51:10.189949 systemd[1]: Reached target machines.target - Containers.
May 8 23:51:10.192108 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 23:51:10.207939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 23:51:10.210413 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 23:51:10.211485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:51:10.212407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 23:51:10.214725 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 23:51:10.219978 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 23:51:10.221838 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 23:51:10.227649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 23:51:10.232811 kernel: loop0: detected capacity change from 0 to 194096
May 8 23:51:10.242031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 23:51:10.242664 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 23:51:10.246922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 23:51:10.278873 kernel: loop1: detected capacity change from 0 to 116808
May 8 23:51:10.313880 kernel: loop2: detected capacity change from 0 to 113536
May 8 23:51:10.361808 kernel: loop3: detected capacity change from 0 to 194096
May 8 23:51:10.367812 kernel: loop4: detected capacity change from 0 to 116808
May 8 23:51:10.372806 kernel: loop5: detected capacity change from 0 to 113536
May 8 23:51:10.375141 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 23:51:10.375525 (sd-merge)[1290]: Merged extensions into '/usr'.
May 8 23:51:10.380492 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 23:51:10.380504 systemd[1]: Reloading...
May 8 23:51:10.418807 zram_generator::config[1316]: No configuration found.
May 8 23:51:10.471609 ldconfig[1274]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 23:51:10.529664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 23:51:10.571686 systemd[1]: Reloading finished in 190 ms.
May 8 23:51:10.588597 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 23:51:10.589855 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 23:51:10.604012 systemd[1]: Starting ensure-sysext.service...
May 8 23:51:10.605868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:51:10.608862 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)...
May 8 23:51:10.608876 systemd[1]: Reloading...
May 8 23:51:10.621376 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 23:51:10.621626 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 23:51:10.622274 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 23:51:10.622486 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
May 8 23:51:10.622542 systemd-tmpfiles[1362]: ACLs are not supported, ignoring.
May 8 23:51:10.625104 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
May 8 23:51:10.625116 systemd-tmpfiles[1362]: Skipping /boot
May 8 23:51:10.631733 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot.
May 8 23:51:10.631762 systemd-tmpfiles[1362]: Skipping /boot
May 8 23:51:10.646598 zram_generator::config[1387]: No configuration found.
May 8 23:51:10.737965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 23:51:10.780156 systemd[1]: Reloading finished in 171 ms.
May 8 23:51:10.791404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:51:10.802824 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 23:51:10.804919 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 23:51:10.806924 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 23:51:10.811952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:51:10.817268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 23:51:10.835596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:51:10.838898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:51:10.846144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:51:10.853115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:51:10.854103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:51:10.855517 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 23:51:10.858577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:51:10.858729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:51:10.860296 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:51:10.860431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:51:10.862131 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:51:10.862835 augenrules[1465]: No rules
May 8 23:51:10.865147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:51:10.866543 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 23:51:10.866758 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 23:51:10.868016 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 23:51:10.873110 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 23:51:10.877586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:51:10.886001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:51:10.887884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:51:10.892131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:51:10.892667 systemd-resolved[1435]: Positive Trust Anchors:
May 8 23:51:10.892731 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:51:10.892775 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:51:10.893535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:51:10.897594 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 23:51:10.898161 systemd-resolved[1435]: Defaulting to hostname 'linux'.
May 8 23:51:10.898546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 23:51:10.899602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:51:10.899967 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:51:10.901312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:51:10.901642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:51:10.903035 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:51:10.904308 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:51:10.904625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:51:10.908200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 23:51:10.912178 systemd[1]: Reached target network.target - Network.
May 8 23:51:10.913058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:51:10.919996 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 23:51:10.920729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:51:10.921900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:51:10.923611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 23:51:10.925996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:51:10.930571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:51:10.931477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:51:10.931599 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 23:51:10.932538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:51:10.932705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:51:10.934487 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 23:51:10.934621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 23:51:10.935879 systemd[1]: Finished ensure-sysext.service.
May 8 23:51:10.936773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:51:10.936942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:51:10.938079 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:51:10.938289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:51:10.939952 augenrules[1494]: /sbin/augenrules: No change
May 8 23:51:10.943711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 23:51:10.943889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 23:51:10.945685 augenrules[1526]: No rules
May 8 23:51:10.953997 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 23:51:10.955014 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 23:51:10.955232 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 23:51:10.996464 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 23:51:10.997708 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 23:51:10.997764 systemd-timesyncd[1531]: Initial clock synchronization to Thu 2025-05-08 23:51:10.925679 UTC.
May 8 23:51:10.997927 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 23:51:10.998752 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 23:51:10.999656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 23:51:11.000663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 23:51:11.001707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 23:51:11.001755 systemd[1]: Reached target paths.target - Path Units.
May 8 23:51:11.002400 systemd[1]: Reached target time-set.target - System Time Set.
May 8 23:51:11.003293 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 23:51:11.004156 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 23:51:11.005044 systemd[1]: Reached target timers.target - Timer Units.
May 8 23:51:11.006224 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 23:51:11.008588 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 23:51:11.010813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 23:51:11.016841 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 23:51:11.017922 systemd[1]: Reached target sockets.target - Socket Units.
May 8 23:51:11.018889 systemd[1]: Reached target basic.target - Basic System.
May 8 23:51:11.019956 systemd[1]: System is tainted: cgroupsv1
May 8 23:51:11.020012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 23:51:11.020035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 23:51:11.021842 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 23:51:11.023971 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 23:51:11.026012 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 23:51:11.030890 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
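The "Listening on dbus.socket / docker.socket / sshd.socket" entries above are systemd socket activation: PID 1 creates the listening socket itself, and the matching service is only started on the first incoming connection. A minimal sketch of such a unit pair follows; the unit name, path, and service command are illustrative, not taken from this log.

```ini
# example.socket - hypothetical socket-activated listener (illustrative sketch)
[Socket]
ListenStream=/run/example.sock

[Install]
WantedBy=sockets.target

# example.service - started by systemd when example.socket first accepts a connection
[Service]
ExecStart=/usr/bin/example-daemon
```

Because the socket exists before the daemon does, clients can connect during boot and block until the service is up, which is why sshd.socket can be "Listening" here seconds before any sshd process runs.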
May 8 23:51:11.031625 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 23:51:11.034930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 23:51:11.037168 jq[1539]: false
May 8 23:51:11.037659 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 23:51:11.041752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 23:51:11.050930 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 23:51:11.056589 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 23:51:11.061052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 23:51:11.062256 extend-filesystems[1541]: Found loop3
May 8 23:51:11.062256 extend-filesystems[1541]: Found loop4
May 8 23:51:11.062256 extend-filesystems[1541]: Found loop5
May 8 23:51:11.062256 extend-filesystems[1541]: Found vda
May 8 23:51:11.062256 extend-filesystems[1541]: Found vda1
May 8 23:51:11.062256 extend-filesystems[1541]: Found vda2
May 8 23:51:11.062256 extend-filesystems[1541]: Found vda3
May 8 23:51:11.072954 extend-filesystems[1541]: Found usr
May 8 23:51:11.072954 extend-filesystems[1541]: Found vda4
May 8 23:51:11.072954 extend-filesystems[1541]: Found vda6
May 8 23:51:11.072954 extend-filesystems[1541]: Found vda7
May 8 23:51:11.072954 extend-filesystems[1541]: Found vda9
May 8 23:51:11.072954 extend-filesystems[1541]: Checking size of /dev/vda9
May 8 23:51:11.114687 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1238)
May 8 23:51:11.063331 systemd[1]: Starting update-engine.service - Update Engine...
May 8 23:51:11.076212 dbus-daemon[1538]: [system] SELinux support is enabled
May 8 23:51:11.117444 extend-filesystems[1541]: Resized partition /dev/vda9
May 8 23:51:11.070975 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 23:51:11.077009 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 23:51:11.130459 jq[1561]: true
May 8 23:51:11.086568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 23:51:11.086880 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 23:51:11.087129 systemd[1]: motdgen.service: Deactivated successfully.
May 8 23:51:11.087336 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 23:51:11.092127 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 23:51:11.092346 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 23:51:11.138951 tar[1567]: linux-arm64/helm
May 8 23:51:11.140606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 23:51:11.141848 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 23:51:11.143287 jq[1570]: true
May 8 23:51:11.143686 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 23:51:11.143709 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 23:51:11.150730 extend-filesystems[1572]: resize2fs 1.47.1 (20-May-2024)
May 8 23:51:11.158645 update_engine[1557]: I20250508 23:51:11.157919 1557 main.cc:92] Flatcar Update Engine starting
May 8 23:51:11.156287 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 23:51:11.159972 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (Power Button)
May 8 23:51:11.160844 update_engine[1557]: I20250508 23:51:11.160516 1557 update_check_scheduler.cc:74] Next update check in 4m28s
May 8 23:51:11.160591 systemd[1]: Started update-engine.service - Update Engine.
May 8 23:51:11.161991 systemd-logind[1552]: New seat seat0.
May 8 23:51:11.163892 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 23:51:11.167165 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 23:51:11.170017 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 23:51:11.175098 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 23:51:11.216824 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 23:51:11.230279 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 23:51:11.230279 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 23:51:11.230279 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 23:51:11.237651 extend-filesystems[1541]: Resized filesystem in /dev/vda9
May 8 23:51:11.238360 bash[1599]: Updated "/home/core/.ssh/authorized_keys"
May 8 23:51:11.232593 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 23:51:11.232876 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
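The EXT4-fs lines above record resize2fs growing the root filesystem online, from 553472 to 1864699 blocks of 4 KiB each ("(4k) blocks" in the resize2fs output). The arithmetic behind those block counts can be checked directly:

```python
BLOCK_SIZE = 4096  # ext4 block size, per the "(4k) blocks" message

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB (2**30 bytes)."""
    return blocks * BLOCK_SIZE / 2**30

old_gib = blocks_to_gib(553_472)    # size before extend-filesystems ran
new_gib = blocks_to_gib(1_864_699)  # size after the online resize
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # prints: 2.11 GiB -> 7.11 GiB
```

So the root partition grew from roughly 2.1 GiB to 7.1 GiB, consistent with a small install image being expanded to fill the virtual disk on first boot.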
May 8 23:51:11.238411 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 23:51:11.240957 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 23:51:11.245962 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 23:51:11.295122 systemd-networkd[1231]: eth0: Gained IPv6LL
May 8 23:51:11.301496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 23:51:11.303711 systemd[1]: Reached target network-online.target - Network is Online.
May 8 23:51:11.313024 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 23:51:11.319720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:51:11.322345 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 23:51:11.346155 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 23:51:11.352437 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 23:51:11.352735 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 23:51:11.355786 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 23:51:11.383006 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 23:51:11.400138 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 23:51:11.412707 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 23:51:11.414307 systemd[1]: issuegen.service: Deactivated successfully.
May 8 23:51:11.414525 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 23:51:11.422706 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 23:51:11.439510 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 23:51:11.462375 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:51:11.465036 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 23:51:11.466426 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:51:11.466772 containerd[1571]: time="2025-05-08T23:51:11.466687477Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 23:51:11.490684 containerd[1571]: time="2025-05-08T23:51:11.490635977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.493439 containerd[1571]: time="2025-05-08T23:51:11.493377141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:51:11.493439 containerd[1571]: time="2025-05-08T23:51:11.493433668Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 23:51:11.493517 containerd[1571]: time="2025-05-08T23:51:11.493459672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 23:51:11.493799 containerd[1571]: time="2025-05-08T23:51:11.493660608Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:51:11.493799 containerd[1571]: time="2025-05-08T23:51:11.493691844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.493799 containerd[1571]: time="2025-05-08T23:51:11.493754080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:51:11.493799 containerd[1571]: time="2025-05-08T23:51:11.493769777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494025933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494052254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494071360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494082460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494214263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494526470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494656768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:51:11.494706 containerd[1571]: time="2025-05-08T23:51:11.494671593Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:51:11.494935 containerd[1571]: time="2025-05-08T23:51:11.494750992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 23:51:11.494935 containerd[1571]: time="2025-05-08T23:51:11.494817073Z" level=info msg="metadata content store policy set" policy=shared May 8 23:51:11.498531 containerd[1571]: time="2025-05-08T23:51:11.498500524Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:51:11.498587 containerd[1571]: time="2025-05-08T23:51:11.498550273Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 23:51:11.498587 containerd[1571]: time="2025-05-08T23:51:11.498565495Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:51:11.498587 containerd[1571]: time="2025-05-08T23:51:11.498581272Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 23:51:11.498651 containerd[1571]: time="2025-05-08T23:51:11.498594393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:51:11.498756 containerd[1571]: time="2025-05-08T23:51:11.498735076Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:51:11.499152 containerd[1571]: time="2025-05-08T23:51:11.499106703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 8 23:51:11.499254 containerd[1571]: time="2025-05-08T23:51:11.499234543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:51:11.499290 containerd[1571]: time="2025-05-08T23:51:11.499256107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:51:11.499290 containerd[1571]: time="2025-05-08T23:51:11.499270338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:51:11.499290 containerd[1571]: time="2025-05-08T23:51:11.499284252Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499342 containerd[1571]: time="2025-05-08T23:51:11.499296580Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499342 containerd[1571]: time="2025-05-08T23:51:11.499309622Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499342 containerd[1571]: time="2025-05-08T23:51:11.499321752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499342 containerd[1571]: time="2025-05-08T23:51:11.499334833Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499407 containerd[1571]: time="2025-05-08T23:51:11.499347399Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499407 containerd[1571]: time="2025-05-08T23:51:11.499359529Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 8 23:51:11.499407 containerd[1571]: time="2025-05-08T23:51:11.499370390Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 23:51:11.499407 containerd[1571]: time="2025-05-08T23:51:11.499390924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499407 containerd[1571]: time="2025-05-08T23:51:11.499404243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499415778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499427631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499439206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499451415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499462633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499474921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499486536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499505 containerd[1571]: time="2025-05-08T23:51:11.499500133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499511153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499523203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499534818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499548375Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499570137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499583020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499637 containerd[1571]: time="2025-05-08T23:51:11.499594040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:51:11.499871 containerd[1571]: time="2025-05-08T23:51:11.499851900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 23:51:11.499912 containerd[1571]: time="2025-05-08T23:51:11.499876081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:51:11.499912 containerd[1571]: time="2025-05-08T23:51:11.499887259Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 23:51:11.499912 containerd[1571]: time="2025-05-08T23:51:11.499898715Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:51:11.499912 containerd[1571]: time="2025-05-08T23:51:11.499907476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:51:11.499983 containerd[1571]: time="2025-05-08T23:51:11.499928089Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:51:11.499983 containerd[1571]: time="2025-05-08T23:51:11.499939228Z" level=info msg="NRI interface is disabled by configuration." May 8 23:51:11.499983 containerd[1571]: time="2025-05-08T23:51:11.499951318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 23:51:11.500381 containerd[1571]: time="2025-05-08T23:51:11.500300073Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:51:11.500496 containerd[1571]: time="2025-05-08T23:51:11.500387044Z" level=info msg="Connect containerd service" May 8 23:51:11.500496 containerd[1571]: time="2025-05-08T23:51:11.500424742Z" level=info msg="using legacy CRI server" May 8 23:51:11.500496 containerd[1571]: time="2025-05-08T23:51:11.500432194Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:51:11.500672 containerd[1571]: time="2025-05-08T23:51:11.500652633Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:51:11.501302 containerd[1571]: time="2025-05-08T23:51:11.501272964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:51:11.502030 containerd[1571]: time="2025-05-08T23:51:11.502003216Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:51:11.502078 containerd[1571]: time="2025-05-08T23:51:11.502054035Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:51:11.502281 containerd[1571]: time="2025-05-08T23:51:11.502191983Z" level=info msg="Start subscribing containerd event" May 8 23:51:11.502390 containerd[1571]: time="2025-05-08T23:51:11.502340317Z" level=info msg="Start recovering state" May 8 23:51:11.502497 containerd[1571]: time="2025-05-08T23:51:11.502478146Z" level=info msg="Start event monitor" May 8 23:51:11.502761 containerd[1571]: time="2025-05-08T23:51:11.502615619Z" level=info msg="Start snapshots syncer" May 8 23:51:11.502761 containerd[1571]: time="2025-05-08T23:51:11.502634527Z" level=info msg="Start cni network conf syncer for default" May 8 23:51:11.502761 containerd[1571]: time="2025-05-08T23:51:11.502642416Z" level=info msg="Start streaming server" May 8 23:51:11.503355 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:51:11.504388 containerd[1571]: time="2025-05-08T23:51:11.503391735Z" level=info msg="containerd successfully booted in 0.040099s" May 8 23:51:11.548122 tar[1567]: linux-arm64/LICENSE May 8 23:51:11.548122 tar[1567]: linux-arm64/README.md May 8 23:51:11.560079 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 23:51:11.895261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 23:51:11.896455 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 23:51:11.897608 systemd[1]: Startup finished in 6.723s (kernel) + 3.064s (userspace) = 9.787s.
May 8 23:51:11.898973 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 23:51:12.377078 kubelet[1674]: E0508 23:51:12.376974 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 23:51:12.379731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 23:51:12.379993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 23:51:15.577917 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 23:51:15.592076 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:50092.service - OpenSSH per-connection server daemon (10.0.0.1:50092).
May 8 23:51:15.653390 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:51:15.654915 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:51:15.661809 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 23:51:15.675096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 23:51:15.677144 systemd-logind[1552]: New session 1 of user core.
May 8 23:51:15.685034 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 23:51:15.687018 systemd[1]: Starting user@500.service - User Manager for UID 500...
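The kubelet exit above is the expected failure mode on a node that has not yet been joined to a cluster: kubeadm writes /var/lib/kubelet/config.yaml during init/join, so until then the kubelet has nothing to load and exits with status 1. For reference, that file is a KubeletConfiguration object, roughly of this shape (the field values below are illustrative defaults, not taken from this host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# illustrative values only; kubeadm generates the real file at join time
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```

Under the usual Restart= policy the unit would keep retrying until a join populates the file, at which point the kubelet comes up cleanly.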
May 8 23:51:15.692727 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 23:51:15.766062 systemd[1694]: Queued start job for default target default.target.
May 8 23:51:15.766374 systemd[1694]: Created slice app.slice - User Application Slice.
May 8 23:51:15.766391 systemd[1694]: Reached target paths.target - Paths.
May 8 23:51:15.766401 systemd[1694]: Reached target timers.target - Timers.
May 8 23:51:15.775873 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 23:51:15.780939 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 23:51:15.780991 systemd[1694]: Reached target sockets.target - Sockets.
May 8 23:51:15.781003 systemd[1694]: Reached target basic.target - Basic System.
May 8 23:51:15.781034 systemd[1694]: Reached target default.target - Main User Target.
May 8 23:51:15.781056 systemd[1694]: Startup finished in 83ms.
May 8 23:51:15.781330 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 23:51:15.782698 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 23:51:15.843036 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:50106.service - OpenSSH per-connection server daemon (10.0.0.1:50106).
May 8 23:51:15.877673 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 50106 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:51:15.878728 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:51:15.882765 systemd-logind[1552]: New session 2 of user core.
May 8 23:51:15.894997 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 23:51:15.945237 sshd[1709]: Connection closed by 10.0.0.1 port 50106
May 8 23:51:15.945519 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
May 8 23:51:15.951074 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:50112.service - OpenSSH per-connection server daemon (10.0.0.1:50112).
May 8 23:51:15.951438 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:50106.service: Deactivated successfully.
May 8 23:51:15.953131 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit.
May 8 23:51:15.953696 systemd[1]: session-2.scope: Deactivated successfully.
May 8 23:51:15.955004 systemd-logind[1552]: Removed session 2.
May 8 23:51:15.985771 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 50112 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:51:15.986953 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:51:15.990663 systemd-logind[1552]: New session 3 of user core.
May 8 23:51:16.002988 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 23:51:16.050565 sshd[1717]: Connection closed by 10.0.0.1 port 50112
May 8 23:51:16.050945 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
May 8 23:51:16.058994 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:50128.service - OpenSSH per-connection server daemon (10.0.0.1:50128).
May 8 23:51:16.059355 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:50112.service: Deactivated successfully.
May 8 23:51:16.061725 systemd[1]: session-3.scope: Deactivated successfully.
May 8 23:51:16.061859 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit.
May 8 23:51:16.063269 systemd-logind[1552]: Removed session 3.
May 8 23:51:16.093336 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 50128 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:51:16.094419 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:51:16.098850 systemd-logind[1552]: New session 4 of user core.
May 8 23:51:16.107004 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 23:51:16.157068 sshd[1725]: Connection closed by 10.0.0.1 port 50128 May 8 23:51:16.157488 sshd-session[1719]: pam_unix(sshd:session): session closed for user core May 8 23:51:16.165999 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:50130.service - OpenSSH per-connection server daemon (10.0.0.1:50130). May 8 23:51:16.166630 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:50128.service: Deactivated successfully. May 8 23:51:16.168254 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:51:16.168316 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. May 8 23:51:16.169646 systemd-logind[1552]: Removed session 4. May 8 23:51:16.200677 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 50130 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:16.201678 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:16.205812 systemd-logind[1552]: New session 5 of user core. May 8 23:51:16.221012 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 23:51:16.286250 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:51:16.286533 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:51:16.299621 sudo[1734]: pam_unix(sudo:session): session closed for user root May 8 23:51:16.301058 sshd[1733]: Connection closed by 10.0.0.1 port 50130 May 8 23:51:16.301455 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 8 23:51:16.309039 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:50132.service - OpenSSH per-connection server daemon (10.0.0.1:50132). May 8 23:51:16.309403 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:50130.service: Deactivated successfully. May 8 23:51:16.311223 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. May 8 23:51:16.311888 systemd[1]: session-5.scope: Deactivated successfully. 
May 8 23:51:16.313289 systemd-logind[1552]: Removed session 5. May 8 23:51:16.346611 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 50132 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:16.347649 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:16.351838 systemd-logind[1552]: New session 6 of user core. May 8 23:51:16.362006 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:51:16.413034 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:51:16.413625 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:51:16.417062 sudo[1744]: pam_unix(sudo:session): session closed for user root May 8 23:51:16.421712 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 23:51:16.422017 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:51:16.442269 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:51:16.463987 augenrules[1766]: No rules May 8 23:51:16.465054 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:51:16.465293 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:51:16.467277 sudo[1743]: pam_unix(sudo:session): session closed for user root May 8 23:51:16.468430 sshd[1742]: Connection closed by 10.0.0.1 port 50132 May 8 23:51:16.468883 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 8 23:51:16.480978 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148). May 8 23:51:16.481325 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:50132.service: Deactivated successfully. May 8 23:51:16.482852 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. 
May 8 23:51:16.483535 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:51:16.484758 systemd-logind[1552]: Removed session 6. May 8 23:51:16.515171 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:16.516288 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:16.520132 systemd-logind[1552]: New session 7 of user core. May 8 23:51:16.536137 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 23:51:16.585472 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:51:16.585758 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:51:16.895106 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 23:51:16.895249 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 23:51:17.135283 dockerd[1800]: time="2025-05-08T23:51:17.135218353Z" level=info msg="Starting up" May 8 23:51:17.203035 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1221111235-merged.mount: Deactivated successfully. May 8 23:51:17.394206 dockerd[1800]: time="2025-05-08T23:51:17.394167080Z" level=info msg="Loading containers: start." May 8 23:51:17.521820 kernel: Initializing XFRM netlink socket May 8 23:51:17.592598 systemd-networkd[1231]: docker0: Link UP May 8 23:51:17.636894 dockerd[1800]: time="2025-05-08T23:51:17.636841959Z" level=info msg="Loading containers: done." 
May 8 23:51:17.649889 dockerd[1800]: time="2025-05-08T23:51:17.649845978Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 23:51:17.650000 dockerd[1800]: time="2025-05-08T23:51:17.649935854Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 8 23:51:17.650051 dockerd[1800]: time="2025-05-08T23:51:17.650032900Z" level=info msg="Daemon has completed initialization" May 8 23:51:17.679147 dockerd[1800]: time="2025-05-08T23:51:17.679085958Z" level=info msg="API listen on /run/docker.sock" May 8 23:51:17.679296 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 23:51:18.201045 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2188083665-merged.mount: Deactivated successfully. May 8 23:51:18.571109 containerd[1571]: time="2025-05-08T23:51:18.570998112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 23:51:19.185429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820329952.mount: Deactivated successfully. 
May 8 23:51:20.913921 containerd[1571]: time="2025-05-08T23:51:20.913867105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:20.914423 containerd[1571]: time="2025-05-08T23:51:20.914379396Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 8 23:51:20.915102 containerd[1571]: time="2025-05-08T23:51:20.915055403Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:20.917927 containerd[1571]: time="2025-05-08T23:51:20.917899013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:20.919136 containerd[1571]: time="2025-05-08T23:51:20.919081766Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.348039005s" May 8 23:51:20.919136 containerd[1571]: time="2025-05-08T23:51:20.919117350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 23:51:20.937405 containerd[1571]: time="2025-05-08T23:51:20.937375206Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 23:51:22.407976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 8 23:51:22.417044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:22.518169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:22.521672 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:51:22.633458 containerd[1571]: time="2025-05-08T23:51:22.633413503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:22.634414 containerd[1571]: time="2025-05-08T23:51:22.634237513Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 8 23:51:22.635174 containerd[1571]: time="2025-05-08T23:51:22.635120442Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:22.637948 containerd[1571]: time="2025-05-08T23:51:22.637920872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:22.639117 containerd[1571]: time="2025-05-08T23:51:22.639089967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.70168005s" May 8 23:51:22.639176 containerd[1571]: time="2025-05-08T23:51:22.639118787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image 
reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 23:51:22.660033 containerd[1571]: time="2025-05-08T23:51:22.659917639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 23:51:22.672110 kubelet[2082]: E0508 23:51:22.672063 2082 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:51:22.675124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:51:22.675301 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:51:23.906856 containerd[1571]: time="2025-05-08T23:51:23.906815833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:23.907830 containerd[1571]: time="2025-05-08T23:51:23.907150865Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 8 23:51:23.908321 containerd[1571]: time="2025-05-08T23:51:23.908297145Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:23.911133 containerd[1571]: time="2025-05-08T23:51:23.911094947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:23.912434 containerd[1571]: time="2025-05-08T23:51:23.912388759Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", 
repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.252435193s" May 8 23:51:23.912434 containerd[1571]: time="2025-05-08T23:51:23.912423297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 23:51:23.930052 containerd[1571]: time="2025-05-08T23:51:23.930028308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 23:51:24.940623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062222961.mount: Deactivated successfully. May 8 23:51:25.332696 containerd[1571]: time="2025-05-08T23:51:25.332563008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:25.333481 containerd[1571]: time="2025-05-08T23:51:25.333427488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 8 23:51:25.334211 containerd[1571]: time="2025-05-08T23:51:25.334174211Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:25.336425 containerd[1571]: time="2025-05-08T23:51:25.336388016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:25.337047 containerd[1571]: time="2025-05-08T23:51:25.337012189Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.406957007s" May 8 23:51:25.337087 containerd[1571]: time="2025-05-08T23:51:25.337048059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 23:51:25.355290 containerd[1571]: time="2025-05-08T23:51:25.355260086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 23:51:25.887983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885586172.mount: Deactivated successfully. May 8 23:51:26.526740 containerd[1571]: time="2025-05-08T23:51:26.526683658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:26.527277 containerd[1571]: time="2025-05-08T23:51:26.527230075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 8 23:51:26.528200 containerd[1571]: time="2025-05-08T23:51:26.528168814Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:26.532807 containerd[1571]: time="2025-05-08T23:51:26.531046517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:26.533322 containerd[1571]: time="2025-05-08T23:51:26.533289632Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.177995153s" May 8 23:51:26.533417 containerd[1571]: time="2025-05-08T23:51:26.533402335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 23:51:26.551275 containerd[1571]: time="2025-05-08T23:51:26.551247253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 23:51:27.052969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784502152.mount: Deactivated successfully. May 8 23:51:27.066188 containerd[1571]: time="2025-05-08T23:51:27.066138117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:27.066627 containerd[1571]: time="2025-05-08T23:51:27.066580247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 8 23:51:27.069297 containerd[1571]: time="2025-05-08T23:51:27.069264714Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:27.075027 containerd[1571]: time="2025-05-08T23:51:27.074972806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:27.075793 containerd[1571]: time="2025-05-08T23:51:27.075740430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 524.458737ms" May 8 23:51:27.075793 
containerd[1571]: time="2025-05-08T23:51:27.075774434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 23:51:27.093323 containerd[1571]: time="2025-05-08T23:51:27.093291334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 23:51:27.677235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269849720.mount: Deactivated successfully. May 8 23:51:30.292756 containerd[1571]: time="2025-05-08T23:51:30.292540890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:30.294238 containerd[1571]: time="2025-05-08T23:51:30.294161016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 8 23:51:30.296810 containerd[1571]: time="2025-05-08T23:51:30.295344294Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:30.300255 containerd[1571]: time="2025-05-08T23:51:30.300208391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:30.301672 containerd[1571]: time="2025-05-08T23:51:30.301641091Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.20831695s" May 8 23:51:30.301812 containerd[1571]: time="2025-05-08T23:51:30.301769640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 23:51:32.908098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 23:51:32.919102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:33.072968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:33.077107 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:51:33.113035 kubelet[2317]: E0508 23:51:33.112979 2317 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:51:33.115891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:51:33.116280 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:51:34.194567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:34.210992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:34.231948 systemd[1]: Reloading requested from client PID 2336 ('systemctl') (unit session-7.scope)... May 8 23:51:34.231965 systemd[1]: Reloading... May 8 23:51:34.294812 zram_generator::config[2382]: No configuration found. May 8 23:51:34.469162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:51:34.520435 systemd[1]: Reloading finished in 288 ms. 
May 8 23:51:34.554616 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 23:51:34.554677 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 23:51:34.554950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:34.556667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:34.651039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:34.655985 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:51:34.691130 kubelet[2432]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:34.691130 kubelet[2432]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:51:34.691130 kubelet[2432]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 23:51:34.691491 kubelet[2432]: I0508 23:51:34.691224 2432 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:51:36.005672 kubelet[2432]: I0508 23:51:36.005626 2432 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 23:51:36.005672 kubelet[2432]: I0508 23:51:36.005662 2432 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:51:36.006076 kubelet[2432]: I0508 23:51:36.005890 2432 server.go:927] "Client rotation is on, will bootstrap in background" May 8 23:51:36.067385 kubelet[2432]: I0508 23:51:36.067342 2432 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:51:36.067638 kubelet[2432]: E0508 23:51:36.067602 2432 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.080575 kubelet[2432]: I0508 23:51:36.080052 2432 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:51:36.081653 kubelet[2432]: I0508 23:51:36.081453 2432 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:51:36.081834 kubelet[2432]: I0508 23:51:36.081659 2432 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 23:51:36.081929 kubelet[2432]: I0508 23:51:36.081901 2432 topology_manager.go:138] "Creating topology manager with none policy" May 8 
23:51:36.081929 kubelet[2432]: I0508 23:51:36.081911 2432 container_manager_linux.go:301] "Creating device plugin manager" May 8 23:51:36.082179 kubelet[2432]: I0508 23:51:36.082162 2432 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:36.084824 kubelet[2432]: I0508 23:51:36.084803 2432 kubelet.go:400] "Attempting to sync node with API server" May 8 23:51:36.084869 kubelet[2432]: I0508 23:51:36.084826 2432 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:51:36.085582 kubelet[2432]: I0508 23:51:36.085009 2432 kubelet.go:312] "Adding apiserver pod source" May 8 23:51:36.085582 kubelet[2432]: I0508 23:51:36.085099 2432 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:51:36.085582 kubelet[2432]: W0508 23:51:36.085496 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.085582 kubelet[2432]: E0508 23:51:36.085542 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.085720 kubelet[2432]: W0508 23:51:36.085594 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.085720 kubelet[2432]: E0508 23:51:36.085617 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 
23:51:36.086318 kubelet[2432]: I0508 23:51:36.086287 2432 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:51:36.086644 kubelet[2432]: I0508 23:51:36.086633 2432 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:51:36.086872 kubelet[2432]: W0508 23:51:36.086860 2432 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 23:51:36.087598 kubelet[2432]: I0508 23:51:36.087574 2432 server.go:1264] "Started kubelet" May 8 23:51:36.088410 kubelet[2432]: I0508 23:51:36.087759 2432 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:51:36.088410 kubelet[2432]: I0508 23:51:36.087895 2432 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:51:36.088410 kubelet[2432]: I0508 23:51:36.088119 2432 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:51:36.089031 kubelet[2432]: I0508 23:51:36.088916 2432 server.go:455] "Adding debug handlers to kubelet server" May 8 23:51:36.091650 kubelet[2432]: I0508 23:51:36.089692 2432 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:51:36.091650 kubelet[2432]: E0508 23:51:36.089877 2432 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db252145148a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 23:51:36.087554212 +0000 UTC 
m=+1.428606935,LastTimestamp:2025-05-08 23:51:36.087554212 +0000 UTC m=+1.428606935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 23:51:36.091650 kubelet[2432]: E0508 23:51:36.090821 2432 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:36.091650 kubelet[2432]: I0508 23:51:36.090911 2432 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 23:51:36.091650 kubelet[2432]: I0508 23:51:36.090981 2432 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:51:36.091650 kubelet[2432]: E0508 23:51:36.091238 2432 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" May 8 23:51:36.091650 kubelet[2432]: W0508 23:51:36.091541 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.091930 kubelet[2432]: E0508 23:51:36.091572 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.092383 kubelet[2432]: I0508 23:51:36.092363 2432 factory.go:221] Registration of the systemd container factory successfully May 8 23:51:36.092501 kubelet[2432]: I0508 23:51:36.092475 2432 reconciler.go:26] "Reconciler: start to sync state" May 8 23:51:36.092611 kubelet[2432]: I0508 23:51:36.092581 2432 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:51:36.093430 kubelet[2432]: E0508 23:51:36.093394 2432 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:51:36.093557 kubelet[2432]: I0508 23:51:36.093545 2432 factory.go:221] Registration of the containerd container factory successfully May 8 23:51:36.104844 kubelet[2432]: I0508 23:51:36.104809 2432 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:51:36.105641 kubelet[2432]: I0508 23:51:36.105614 2432 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 23:51:36.105641 kubelet[2432]: I0508 23:51:36.105641 2432 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:51:36.105725 kubelet[2432]: I0508 23:51:36.105656 2432 kubelet.go:2337] "Starting kubelet main sync loop" May 8 23:51:36.105725 kubelet[2432]: E0508 23:51:36.105691 2432 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:51:36.110697 kubelet[2432]: W0508 23:51:36.110659 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.110834 kubelet[2432]: E0508 23:51:36.110815 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:36.111273 kubelet[2432]: I0508 23:51:36.111246 2432 cpu_manager.go:214] "Starting CPU manager" 
policy="none" May 8 23:51:36.111273 kubelet[2432]: I0508 23:51:36.111270 2432 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:51:36.111359 kubelet[2432]: I0508 23:51:36.111286 2432 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:36.115185 kubelet[2432]: I0508 23:51:36.115161 2432 policy_none.go:49] "None policy: Start" May 8 23:51:36.115818 kubelet[2432]: I0508 23:51:36.115739 2432 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:51:36.115818 kubelet[2432]: I0508 23:51:36.115764 2432 state_mem.go:35] "Initializing new in-memory state store" May 8 23:51:36.120663 kubelet[2432]: I0508 23:51:36.120637 2432 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:51:36.121400 kubelet[2432]: I0508 23:51:36.120823 2432 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:51:36.121400 kubelet[2432]: I0508 23:51:36.120912 2432 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:51:36.122159 kubelet[2432]: E0508 23:51:36.122144 2432 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 23:51:36.191977 kubelet[2432]: I0508 23:51:36.191955 2432 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 23:51:36.192398 kubelet[2432]: E0508 23:51:36.192363 2432 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 8 23:51:36.206600 kubelet[2432]: I0508 23:51:36.206555 2432 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 23:51:36.207523 kubelet[2432]: I0508 23:51:36.207504 2432 topology_manager.go:215] 
"Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 23:51:36.208365 kubelet[2432]: I0508 23:51:36.208317 2432 topology_manager.go:215] "Topology Admit Handler" podUID="6dee4ce1b0526438a012ea8e0e875f93" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 23:51:36.291799 kubelet[2432]: E0508 23:51:36.291669 2432 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" May 8 23:51:36.293874 kubelet[2432]: I0508 23:51:36.293837 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:36.293874 kubelet[2432]: I0508 23:51:36.293873 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 23:51:36.293949 kubelet[2432]: I0508 23:51:36.293893 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:36.293949 kubelet[2432]: I0508 23:51:36.293910 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:36.293949 kubelet[2432]: I0508 23:51:36.293925 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:36.293949 kubelet[2432]: I0508 23:51:36.293939 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:36.294038 kubelet[2432]: I0508 23:51:36.293953 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:36.294038 kubelet[2432]: I0508 23:51:36.293966 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:36.294038 kubelet[2432]: I0508 23:51:36.293980 2432 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:36.393761 kubelet[2432]: I0508 23:51:36.393698 2432 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 23:51:36.394242 kubelet[2432]: E0508 23:51:36.394028 2432 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 8 23:51:36.511737 kubelet[2432]: E0508 23:51:36.511706 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:36.512419 kubelet[2432]: E0508 23:51:36.512319 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:36.512491 containerd[1571]: time="2025-05-08T23:51:36.512355028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 23:51:36.512818 containerd[1571]: time="2025-05-08T23:51:36.512614826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 23:51:36.514114 kubelet[2432]: E0508 23:51:36.514082 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:36.514418 containerd[1571]: time="2025-05-08T23:51:36.514374384Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6dee4ce1b0526438a012ea8e0e875f93,Namespace:kube-system,Attempt:0,}" May 8 23:51:36.692922 kubelet[2432]: E0508 23:51:36.692876 2432 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" May 8 23:51:36.795600 kubelet[2432]: I0508 23:51:36.795547 2432 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 23:51:36.795879 kubelet[2432]: E0508 23:51:36.795855 2432 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" May 8 23:51:37.028395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687850353.mount: Deactivated successfully. May 8 23:51:37.031760 containerd[1571]: time="2025-05-08T23:51:37.031713511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:37.034591 containerd[1571]: time="2025-05-08T23:51:37.034463382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 23:51:37.035356 containerd[1571]: time="2025-05-08T23:51:37.035325861Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:37.036761 containerd[1571]: time="2025-05-08T23:51:37.036698518Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 
23:51:37.037381 containerd[1571]: time="2025-05-08T23:51:37.037330941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:51:37.038442 containerd[1571]: time="2025-05-08T23:51:37.038398003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:37.039309 containerd[1571]: time="2025-05-08T23:51:37.039269000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:51:37.041121 containerd[1571]: time="2025-05-08T23:51:37.041091570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:37.044403 containerd[1571]: time="2025-05-08T23:51:37.044208859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.779093ms" May 8 23:51:37.045285 containerd[1571]: time="2025-05-08T23:51:37.045258886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.83408ms" May 8 23:51:37.046566 containerd[1571]: time="2025-05-08T23:51:37.046509616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.829491ms" May 8 23:51:37.060274 kubelet[2432]: W0508 23:51:37.060198 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.060274 kubelet[2432]: E0508 23:51:37.060272 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.178636 containerd[1571]: time="2025-05-08T23:51:37.178540324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:37.178636 containerd[1571]: time="2025-05-08T23:51:37.178603946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:37.179296 containerd[1571]: time="2025-05-08T23:51:37.179118722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.179415 containerd[1571]: time="2025-05-08T23:51:37.179355896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.181263 containerd[1571]: time="2025-05-08T23:51:37.181131320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:37.181263 containerd[1571]: time="2025-05-08T23:51:37.181195902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:37.181263 containerd[1571]: time="2025-05-08T23:51:37.181211217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.181717 containerd[1571]: time="2025-05-08T23:51:37.181601148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.183040 containerd[1571]: time="2025-05-08T23:51:37.182962008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:37.183143 containerd[1571]: time="2025-05-08T23:51:37.183114485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:37.183143 containerd[1571]: time="2025-05-08T23:51:37.183132560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.183383 containerd[1571]: time="2025-05-08T23:51:37.183312310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:37.221551 containerd[1571]: time="2025-05-08T23:51:37.220972467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"03aeb6da51f70e0aad3aaf59e0ad000de6076ff6c3d47097955fdba78d87f913\"" May 8 23:51:37.222847 kubelet[2432]: E0508 23:51:37.222615 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:37.225360 containerd[1571]: time="2025-05-08T23:51:37.225318293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6dee4ce1b0526438a012ea8e0e875f93,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f9bd2ce750bef35711ce964045c79d8c9ae48bd0762feed06753ef4c684d47\"" May 8 23:51:37.226083 containerd[1571]: time="2025-05-08T23:51:37.226031573Z" level=info msg="CreateContainer within sandbox \"03aeb6da51f70e0aad3aaf59e0ad000de6076ff6c3d47097955fdba78d87f913\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 23:51:37.226370 kubelet[2432]: E0508 23:51:37.226355 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:37.229842 containerd[1571]: time="2025-05-08T23:51:37.229731180Z" level=info msg="CreateContainer within sandbox \"39f9bd2ce750bef35711ce964045c79d8c9ae48bd0762feed06753ef4c684d47\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 23:51:37.229898 containerd[1571]: time="2025-05-08T23:51:37.229863583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"98e3c2c72456598954434eb59ae24978349418c84ab93048ee7b825ed09e307c\"" May 8 23:51:37.230511 kubelet[2432]: E0508 23:51:37.230356 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:37.232229 containerd[1571]: time="2025-05-08T23:51:37.232201489Z" level=info msg="CreateContainer within sandbox \"98e3c2c72456598954434eb59ae24978349418c84ab93048ee7b825ed09e307c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 23:51:37.241465 containerd[1571]: time="2025-05-08T23:51:37.241415995Z" level=info msg="CreateContainer within sandbox \"03aeb6da51f70e0aad3aaf59e0ad000de6076ff6c3d47097955fdba78d87f913\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dad86f466256bf69d1fc339772558cb0e2e120cbcf77788e7f6c67475f85f898\"" May 8 23:51:37.242065 containerd[1571]: time="2025-05-08T23:51:37.242012028Z" level=info msg="StartContainer for \"dad86f466256bf69d1fc339772558cb0e2e120cbcf77788e7f6c67475f85f898\"" May 8 23:51:37.245021 containerd[1571]: time="2025-05-08T23:51:37.244988596Z" level=info msg="CreateContainer within sandbox \"39f9bd2ce750bef35711ce964045c79d8c9ae48bd0762feed06753ef4c684d47\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec74eb34216569a9d35075e54564c148bea8fc3fcc7a74967c58cdbab798bced\"" May 8 23:51:37.245504 containerd[1571]: time="2025-05-08T23:51:37.245475980Z" level=info msg="StartContainer for \"ec74eb34216569a9d35075e54564c148bea8fc3fcc7a74967c58cdbab798bced\"" May 8 23:51:37.247850 containerd[1571]: time="2025-05-08T23:51:37.247804769Z" level=info msg="CreateContainer within sandbox \"98e3c2c72456598954434eb59ae24978349418c84ab93048ee7b825ed09e307c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"40de2e1ba5d3caff16e5b748c0f86582224a205f637cf19eb5e880a901e2e810\"" May 8 23:51:37.248769 
containerd[1571]: time="2025-05-08T23:51:37.248327983Z" level=info msg="StartContainer for \"40de2e1ba5d3caff16e5b748c0f86582224a205f637cf19eb5e880a901e2e810\"" May 8 23:51:37.271292 kubelet[2432]: W0508 23:51:37.271141 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.271292 kubelet[2432]: E0508 23:51:37.271230 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.315234 containerd[1571]: time="2025-05-08T23:51:37.313511329Z" level=info msg="StartContainer for \"dad86f466256bf69d1fc339772558cb0e2e120cbcf77788e7f6c67475f85f898\" returns successfully" May 8 23:51:37.315234 containerd[1571]: time="2025-05-08T23:51:37.313635055Z" level=info msg="StartContainer for \"40de2e1ba5d3caff16e5b748c0f86582224a205f637cf19eb5e880a901e2e810\" returns successfully" May 8 23:51:37.315234 containerd[1571]: time="2025-05-08T23:51:37.313660208Z" level=info msg="StartContainer for \"ec74eb34216569a9d35075e54564c148bea8fc3fcc7a74967c58cdbab798bced\" returns successfully" May 8 23:51:37.315364 kubelet[2432]: W0508 23:51:37.314004 2432 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.315364 kubelet[2432]: E0508 23:51:37.314056 2432 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused May 8 23:51:37.493668 kubelet[2432]: E0508 23:51:37.493615 2432 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" May 8 23:51:37.599174 kubelet[2432]: I0508 23:51:37.598201 2432 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 23:51:38.117620 kubelet[2432]: E0508 23:51:38.117322 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:38.123081 kubelet[2432]: E0508 23:51:38.122803 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:38.125631 kubelet[2432]: E0508 23:51:38.125562 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:39.129358 kubelet[2432]: E0508 23:51:39.129301 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:39.460046 kubelet[2432]: E0508 23:51:39.460013 2432 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 23:51:39.584533 kubelet[2432]: E0508 23:51:39.584425 2432 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183db252145148a4 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 23:51:36.087554212 +0000 UTC m=+1.428606935,LastTimestamp:2025-05-08 23:51:36.087554212 +0000 UTC m=+1.428606935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 23:51:39.642647 kubelet[2432]: I0508 23:51:39.642535 2432 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 23:51:39.651452 kubelet[2432]: E0508 23:51:39.651383 2432 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:39.752359 kubelet[2432]: E0508 23:51:39.752105 2432 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:40.086207 kubelet[2432]: I0508 23:51:40.086113 2432 apiserver.go:52] "Watching apiserver" May 8 23:51:40.091470 kubelet[2432]: I0508 23:51:40.091434 2432 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:51:40.559367 kubelet[2432]: E0508 23:51:40.559311 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:41.130656 kubelet[2432]: E0508 23:51:41.130627 2432 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:41.612095 systemd[1]: Reloading requested from client PID 2705 ('systemctl') (unit session-7.scope)... May 8 23:51:41.612112 systemd[1]: Reloading... 
May 8 23:51:41.664820 zram_generator::config[2744]: No configuration found. May 8 23:51:41.836070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:51:41.893680 systemd[1]: Reloading finished in 281 ms. May 8 23:51:41.920511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:41.940805 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:51:41.941139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:41.954025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:42.039916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:42.044880 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:51:42.088456 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:42.088456 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:51:42.088456 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 23:51:42.088456 kubelet[2796]: I0508 23:51:42.088404 2796 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:51:42.094831 kubelet[2796]: I0508 23:51:42.094396 2796 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 23:51:42.094831 kubelet[2796]: I0508 23:51:42.094422 2796 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:51:42.094831 kubelet[2796]: I0508 23:51:42.094603 2796 server.go:927] "Client rotation is on, will bootstrap in background" May 8 23:51:42.095978 kubelet[2796]: I0508 23:51:42.095954 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 23:51:42.097328 kubelet[2796]: I0508 23:51:42.097199 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:51:42.102375 kubelet[2796]: I0508 23:51:42.102348 2796 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:51:42.102755 kubelet[2796]: I0508 23:51:42.102726 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:51:42.102942 kubelet[2796]: I0508 23:51:42.102759 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 23:51:42.103019 kubelet[2796]: I0508 23:51:42.102952 2796 topology_manager.go:138] "Creating topology manager with none policy" May 8 
23:51:42.103019 kubelet[2796]: I0508 23:51:42.102962 2796 container_manager_linux.go:301] "Creating device plugin manager" May 8 23:51:42.103019 kubelet[2796]: I0508 23:51:42.102993 2796 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:42.103098 kubelet[2796]: I0508 23:51:42.103087 2796 kubelet.go:400] "Attempting to sync node with API server" May 8 23:51:42.103128 kubelet[2796]: I0508 23:51:42.103100 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:51:42.103128 kubelet[2796]: I0508 23:51:42.103125 2796 kubelet.go:312] "Adding apiserver pod source" May 8 23:51:42.103164 kubelet[2796]: I0508 23:51:42.103138 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:51:42.104267 kubelet[2796]: I0508 23:51:42.104022 2796 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:51:42.104267 kubelet[2796]: I0508 23:51:42.104171 2796 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:51:42.105217 kubelet[2796]: I0508 23:51:42.105188 2796 server.go:1264] "Started kubelet" May 8 23:51:42.107341 kubelet[2796]: I0508 23:51:42.107293 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:51:42.108040 kubelet[2796]: I0508 23:51:42.107931 2796 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:51:42.108190 kubelet[2796]: I0508 23:51:42.108157 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:51:42.108477 kubelet[2796]: I0508 23:51:42.108464 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:51:42.110424 kubelet[2796]: I0508 23:51:42.110401 2796 server.go:455] "Adding debug handlers to kubelet server" May 8 23:51:42.119355 kubelet[2796]: I0508 23:51:42.119324 2796 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 23:51:42.119562 kubelet[2796]: I0508 23:51:42.119458 2796 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:51:42.119639 kubelet[2796]: I0508 23:51:42.119616 2796 reconciler.go:26] "Reconciler: start to sync state" May 8 23:51:42.122000 kubelet[2796]: E0508 23:51:42.121974 2796 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:51:42.124016 kubelet[2796]: I0508 23:51:42.123993 2796 factory.go:221] Registration of the systemd container factory successfully May 8 23:51:42.124429 kubelet[2796]: I0508 23:51:42.124401 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:51:42.135547 kubelet[2796]: I0508 23:51:42.135525 2796 factory.go:221] Registration of the containerd container factory successfully May 8 23:51:42.137485 kubelet[2796]: I0508 23:51:42.136875 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:51:42.137811 kubelet[2796]: I0508 23:51:42.137757 2796 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 23:51:42.137876 kubelet[2796]: I0508 23:51:42.137861 2796 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:51:42.137932 kubelet[2796]: I0508 23:51:42.137923 2796 kubelet.go:2337] "Starting kubelet main sync loop" May 8 23:51:42.138032 kubelet[2796]: E0508 23:51:42.138015 2796 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:51:42.173997 kubelet[2796]: I0508 23:51:42.173972 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:51:42.173997 kubelet[2796]: I0508 23:51:42.173991 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:51:42.174150 kubelet[2796]: I0508 23:51:42.174010 2796 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:42.174171 kubelet[2796]: I0508 23:51:42.174159 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 23:51:42.174190 kubelet[2796]: I0508 23:51:42.174169 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 23:51:42.174190 kubelet[2796]: I0508 23:51:42.174187 2796 policy_none.go:49] "None policy: Start" May 8 23:51:42.175262 kubelet[2796]: I0508 23:51:42.175162 2796 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:51:42.175262 kubelet[2796]: I0508 23:51:42.175192 2796 state_mem.go:35] "Initializing new in-memory state store" May 8 23:51:42.175420 kubelet[2796]: I0508 23:51:42.175392 2796 state_mem.go:75] "Updated machine memory state" May 8 23:51:42.176679 kubelet[2796]: I0508 23:51:42.176654 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:51:42.177662 kubelet[2796]: I0508 23:51:42.176918 2796 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:51:42.177662 kubelet[2796]: I0508 23:51:42.177029 2796 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:51:42.221533 kubelet[2796]: I0508 23:51:42.221498 2796 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 23:51:42.228176 kubelet[2796]: I0508 23:51:42.228139 2796 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 23:51:42.228271 kubelet[2796]: I0508 23:51:42.228225 2796 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 23:51:42.239193 kubelet[2796]: I0508 23:51:42.238737 2796 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 23:51:42.239193 kubelet[2796]: I0508 23:51:42.238862 2796 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 23:51:42.239193 kubelet[2796]: I0508 23:51:42.238899 2796 topology_manager.go:215] "Topology Admit Handler" podUID="6dee4ce1b0526438a012ea8e0e875f93" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 23:51:42.247456 kubelet[2796]: E0508 23:51:42.247421 2796 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 23:51:42.319924 kubelet[2796]: I0508 23:51:42.319874 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:42.320023 kubelet[2796]: I0508 23:51:42.319934 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 23:51:42.320023 kubelet[2796]: I0508 23:51:42.319956 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:42.320023 kubelet[2796]: I0508 23:51:42.319982 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:42.320023 kubelet[2796]: I0508 23:51:42.320008 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dee4ce1b0526438a012ea8e0e875f93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6dee4ce1b0526438a012ea8e0e875f93\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:42.320023 kubelet[2796]: I0508 23:51:42.320023 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:42.320154 kubelet[2796]: I0508 23:51:42.320048 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:42.320154 kubelet[2796]: I0508 23:51:42.320065 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:42.320154 kubelet[2796]: I0508 23:51:42.320081 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:42.549035 kubelet[2796]: E0508 23:51:42.548930 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.549148 kubelet[2796]: E0508 23:51:42.549088 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.549522 kubelet[2796]: E0508 23:51:42.549457 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.615962 sudo[2831]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 23:51:42.616226 sudo[2831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 23:51:43.030802 sudo[2831]: 
pam_unix(sudo:session): session closed for user root May 8 23:51:43.103898 kubelet[2796]: I0508 23:51:43.103859 2796 apiserver.go:52] "Watching apiserver" May 8 23:51:43.121842 kubelet[2796]: I0508 23:51:43.120577 2796 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:51:43.152969 kubelet[2796]: E0508 23:51:43.152535 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:43.154121 kubelet[2796]: E0508 23:51:43.154031 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:43.284147 kubelet[2796]: E0508 23:51:43.283979 2796 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 23:51:43.284147 kubelet[2796]: E0508 23:51:43.284385 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:43.347474 kubelet[2796]: I0508 23:51:43.347389 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.347355856 podStartE2EDuration="3.347355856s" podCreationTimestamp="2025-05-08 23:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:43.277164391 +0000 UTC m=+1.229013297" watchObservedRunningTime="2025-05-08 23:51:43.347355856 +0000 UTC m=+1.299204802" May 8 23:51:43.436062 kubelet[2796]: I0508 23:51:43.435550 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.4355322400000001 podStartE2EDuration="1.43553224s" podCreationTimestamp="2025-05-08 23:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:43.434254808 +0000 UTC m=+1.386103754" watchObservedRunningTime="2025-05-08 23:51:43.43553224 +0000 UTC m=+1.387381186" May 8 23:51:43.436062 kubelet[2796]: I0508 23:51:43.435630 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.435625028 podStartE2EDuration="1.435625028s" podCreationTimestamp="2025-05-08 23:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:43.347838433 +0000 UTC m=+1.299687379" watchObservedRunningTime="2025-05-08 23:51:43.435625028 +0000 UTC m=+1.387473934" May 8 23:51:44.152944 kubelet[2796]: E0508 23:51:44.152864 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:44.152944 kubelet[2796]: E0508 23:51:44.152875 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:45.354086 kubelet[2796]: E0508 23:51:45.354046 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:45.610120 sudo[1779]: pam_unix(sudo:session): session closed for user root May 8 23:51:45.611434 sshd[1778]: Connection closed by 10.0.0.1 port 50148 May 8 23:51:45.611868 sshd-session[1772]: pam_unix(sshd:session): session closed for user core May 8 23:51:45.615570 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:50148.service: 
Deactivated successfully. May 8 23:51:45.617359 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. May 8 23:51:45.617485 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:51:45.618721 systemd-logind[1552]: Removed session 7. May 8 23:51:45.695720 kubelet[2796]: E0508 23:51:45.695545 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:50.476143 kubelet[2796]: E0508 23:51:50.476098 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:51.161234 kubelet[2796]: E0508 23:51:51.161194 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:55.361953 kubelet[2796]: E0508 23:51:55.361917 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:55.704796 kubelet[2796]: E0508 23:51:55.704755 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:55.722180 kubelet[2796]: I0508 23:51:55.722147 2796 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 23:51:55.727875 containerd[1571]: time="2025-05-08T23:51:55.727829670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 23:51:55.728187 kubelet[2796]: I0508 23:51:55.728040 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 23:51:56.110313 update_engine[1557]: I20250508 23:51:56.110152 1557 update_attempter.cc:509] Updating boot flags... May 8 23:51:56.130894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2881) May 8 23:51:56.170724 kubelet[2796]: E0508 23:51:56.170681 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:56.177806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2880) May 8 23:51:56.213818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2880) May 8 23:51:56.718892 kubelet[2796]: I0508 23:51:56.718816 2796 topology_manager.go:215] "Topology Admit Handler" podUID="96f3b48a-db44-4c07-badd-ef5de1ed4197" podNamespace="kube-system" podName="kube-proxy-tm4tf" May 8 23:51:56.728311 kubelet[2796]: I0508 23:51:56.728276 2796 topology_manager.go:215] "Topology Admit Handler" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" podNamespace="kube-system" podName="cilium-qz5s8" May 8 23:51:56.769528 kubelet[2796]: I0508 23:51:56.769486 2796 topology_manager.go:215] "Topology Admit Handler" podUID="82e102e7-24b6-45e7-9697-68f1d9c4fcd9" podNamespace="kube-system" podName="cilium-operator-599987898-ndrfn" May 8 23:51:56.826821 kubelet[2796]: I0508 23:51:56.826771 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f3b48a-db44-4c07-badd-ef5de1ed4197-xtables-lock\") pod \"kube-proxy-tm4tf\" (UID: \"96f3b48a-db44-4c07-badd-ef5de1ed4197\") " pod="kube-system/kube-proxy-tm4tf" May 8 23:51:56.826933 kubelet[2796]: I0508 23:51:56.826856 2796 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f3b48a-db44-4c07-badd-ef5de1ed4197-lib-modules\") pod \"kube-proxy-tm4tf\" (UID: \"96f3b48a-db44-4c07-badd-ef5de1ed4197\") " pod="kube-system/kube-proxy-tm4tf" May 8 23:51:56.826933 kubelet[2796]: I0508 23:51:56.826877 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-xtables-lock\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.826933 kubelet[2796]: I0508 23:51:56.826896 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-clustermesh-secrets\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.826933 kubelet[2796]: I0508 23:51:56.826912 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hostproc\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.826933 kubelet[2796]: I0508 23:51:56.826927 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-net\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827056 kubelet[2796]: I0508 23:51:56.826941 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hubble-tls\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827056 kubelet[2796]: I0508 23:51:56.826956 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-bpf-maps\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827056 kubelet[2796]: I0508 23:51:56.826973 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-cilium-config-path\") pod \"cilium-operator-599987898-ndrfn\" (UID: \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\") " pod="kube-system/cilium-operator-599987898-ndrfn" May 8 23:51:56.827056 kubelet[2796]: I0508 23:51:56.826990 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96f3b48a-db44-4c07-badd-ef5de1ed4197-kube-proxy\") pod \"kube-proxy-tm4tf\" (UID: \"96f3b48a-db44-4c07-badd-ef5de1ed4197\") " pod="kube-system/kube-proxy-tm4tf" May 8 23:51:56.827056 kubelet[2796]: I0508 23:51:56.827007 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glm5k\" (UniqueName: \"kubernetes.io/projected/96f3b48a-db44-4c07-badd-ef5de1ed4197-kube-api-access-glm5k\") pod \"kube-proxy-tm4tf\" (UID: \"96f3b48a-db44-4c07-badd-ef5de1ed4197\") " pod="kube-system/kube-proxy-tm4tf" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827021 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-cgroup\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827057 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-etc-cni-netd\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827078 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-config-path\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827094 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-lib-modules\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827111 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-run\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827192 kubelet[2796]: I0508 23:51:56.827128 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8sp\" (UniqueName: \"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-kube-api-access-hn8sp\") pod \"cilium-qz5s8\" (UID: 
\"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827347 kubelet[2796]: I0508 23:51:56.827153 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cni-path\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827347 kubelet[2796]: I0508 23:51:56.827168 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-kernel\") pod \"cilium-qz5s8\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " pod="kube-system/cilium-qz5s8" May 8 23:51:56.827347 kubelet[2796]: I0508 23:51:56.827184 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsfj\" (UniqueName: \"kubernetes.io/projected/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-kube-api-access-rqsfj\") pod \"cilium-operator-599987898-ndrfn\" (UID: \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\") " pod="kube-system/cilium-operator-599987898-ndrfn" May 8 23:51:57.028466 kubelet[2796]: E0508 23:51:57.028047 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.028634 containerd[1571]: time="2025-05-08T23:51:57.028593384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm4tf,Uid:96f3b48a-db44-4c07-badd-ef5de1ed4197,Namespace:kube-system,Attempt:0,}" May 8 23:51:57.034166 kubelet[2796]: E0508 23:51:57.034134 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.035582 containerd[1571]: 
time="2025-05-08T23:51:57.035343727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qz5s8,Uid:39d9cabd-185c-4cc0-8cdc-a7d6d335ae45,Namespace:kube-system,Attempt:0,}" May 8 23:51:57.050935 containerd[1571]: time="2025-05-08T23:51:57.050852799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:57.051228 containerd[1571]: time="2025-05-08T23:51:57.051154071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:57.051297 containerd[1571]: time="2025-05-08T23:51:57.051251109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.052913 containerd[1571]: time="2025-05-08T23:51:57.052873546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.057918 containerd[1571]: time="2025-05-08T23:51:57.057827616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:57.057918 containerd[1571]: time="2025-05-08T23:51:57.057886655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:57.057918 containerd[1571]: time="2025-05-08T23:51:57.057897934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.059001 containerd[1571]: time="2025-05-08T23:51:57.057987092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.074798 kubelet[2796]: E0508 23:51:57.074750 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.078467 containerd[1571]: time="2025-05-08T23:51:57.078417156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ndrfn,Uid:82e102e7-24b6-45e7-9697-68f1d9c4fcd9,Namespace:kube-system,Attempt:0,}" May 8 23:51:57.093858 containerd[1571]: time="2025-05-08T23:51:57.093817231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qz5s8,Uid:39d9cabd-185c-4cc0-8cdc-a7d6d335ae45,Namespace:kube-system,Attempt:0,} returns sandbox id \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\"" May 8 23:51:57.098988 kubelet[2796]: E0508 23:51:57.098961 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.104109 containerd[1571]: time="2025-05-08T23:51:57.103997044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm4tf,Uid:96f3b48a-db44-4c07-badd-ef5de1ed4197,Namespace:kube-system,Attempt:0,} returns sandbox id \"af5882d206b9a2394f15be80f977eed168b42277d7094ff975b1501952d8bb1d\"" May 8 23:51:57.105484 kubelet[2796]: E0508 23:51:57.105443 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.106502 containerd[1571]: time="2025-05-08T23:51:57.106452059Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 23:51:57.115519 containerd[1571]: time="2025-05-08T23:51:57.115384625Z" level=info msg="CreateContainer within sandbox 
\"af5882d206b9a2394f15be80f977eed168b42277d7094ff975b1501952d8bb1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:51:57.123371 containerd[1571]: time="2025-05-08T23:51:57.123262578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:57.123531 containerd[1571]: time="2025-05-08T23:51:57.123342736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:57.123531 containerd[1571]: time="2025-05-08T23:51:57.123357735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.123531 containerd[1571]: time="2025-05-08T23:51:57.123457933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:57.169576 containerd[1571]: time="2025-05-08T23:51:57.169504204Z" level=info msg="CreateContainer within sandbox \"af5882d206b9a2394f15be80f977eed168b42277d7094ff975b1501952d8bb1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e1a5e60ced625e4d238b95b2a3dc12c8d2ba8f64769dc12709e06abfc03e833\"" May 8 23:51:57.173387 containerd[1571]: time="2025-05-08T23:51:57.172699800Z" level=info msg="StartContainer for \"3e1a5e60ced625e4d238b95b2a3dc12c8d2ba8f64769dc12709e06abfc03e833\"" May 8 23:51:57.183944 containerd[1571]: time="2025-05-08T23:51:57.183435078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ndrfn,Uid:82e102e7-24b6-45e7-9697-68f1d9c4fcd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\"" May 8 23:51:57.184552 kubelet[2796]: E0508 23:51:57.184517 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:57.250101 containerd[1571]: time="2025-05-08T23:51:57.250063248Z" level=info msg="StartContainer for \"3e1a5e60ced625e4d238b95b2a3dc12c8d2ba8f64769dc12709e06abfc03e833\" returns successfully" May 8 23:51:58.212384 kubelet[2796]: E0508 23:51:58.212060 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:59.196083 kubelet[2796]: E0508 23:51:59.196048 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:59.883125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854668479.mount: Deactivated successfully. May 8 23:52:01.118028 containerd[1571]: time="2025-05-08T23:52:01.117966975Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:52:01.118447 containerd[1571]: time="2025-05-08T23:52:01.118378486Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 23:52:01.119178 containerd[1571]: time="2025-05-08T23:52:01.119144270Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:52:01.121011 containerd[1571]: time="2025-05-08T23:52:01.120974950Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.014483891s" May 8 23:52:01.121057 containerd[1571]: time="2025-05-08T23:52:01.121015789Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 23:52:01.124693 containerd[1571]: time="2025-05-08T23:52:01.124654111Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 23:52:01.124797 containerd[1571]: time="2025-05-08T23:52:01.124662270Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:52:01.151949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921563319.mount: Deactivated successfully. 
May 8 23:52:01.153040 containerd[1571]: time="2025-05-08T23:52:01.152991258Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\"" May 8 23:52:01.153712 containerd[1571]: time="2025-05-08T23:52:01.153683203Z" level=info msg="StartContainer for \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\"" May 8 23:52:01.199580 containerd[1571]: time="2025-05-08T23:52:01.199527611Z" level=info msg="StartContainer for \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\" returns successfully" May 8 23:52:01.325797 containerd[1571]: time="2025-05-08T23:52:01.323163898Z" level=info msg="shim disconnected" id=592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b namespace=k8s.io May 8 23:52:01.325797 containerd[1571]: time="2025-05-08T23:52:01.325616365Z" level=warning msg="cleaning up after shim disconnected" id=592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b namespace=k8s.io May 8 23:52:01.325797 containerd[1571]: time="2025-05-08T23:52:01.325628404Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:02.149967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b-rootfs.mount: Deactivated successfully. 
May 8 23:52:02.167448 kubelet[2796]: I0508 23:52:02.167287 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tm4tf" podStartSLOduration=6.167261122 podStartE2EDuration="6.167261122s" podCreationTimestamp="2025-05-08 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:58.249690393 +0000 UTC m=+16.201539339" watchObservedRunningTime="2025-05-08 23:52:02.167261122 +0000 UTC m=+20.119110068" May 8 23:52:02.211188 kubelet[2796]: E0508 23:52:02.210702 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:02.217812 containerd[1571]: time="2025-05-08T23:52:02.217411926Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:52:02.239435 containerd[1571]: time="2025-05-08T23:52:02.239379953Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\"" May 8 23:52:02.239950 containerd[1571]: time="2025-05-08T23:52:02.239919822Z" level=info msg="StartContainer for \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\"" May 8 23:52:02.295849 containerd[1571]: time="2025-05-08T23:52:02.295590033Z" level=info msg="StartContainer for \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\" returns successfully" May 8 23:52:02.314040 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:52:02.314301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 8 23:52:02.314363 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 23:52:02.326112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:52:02.346597 containerd[1571]: time="2025-05-08T23:52:02.344744778Z" level=info msg="shim disconnected" id=34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39 namespace=k8s.io May 8 23:52:02.346597 containerd[1571]: time="2025-05-08T23:52:02.344812737Z" level=warning msg="cleaning up after shim disconnected" id=34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39 namespace=k8s.io May 8 23:52:02.346597 containerd[1571]: time="2025-05-08T23:52:02.344820976Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:02.350903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:52:02.557553 containerd[1571]: time="2025-05-08T23:52:02.557509386Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:52:02.558274 containerd[1571]: time="2025-05-08T23:52:02.557940337Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 23:52:02.559078 containerd[1571]: time="2025-05-08T23:52:02.559037475Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:52:02.560737 containerd[1571]: time="2025-05-08T23:52:02.560697280Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.43600409s" May 8 23:52:02.560950 containerd[1571]: time="2025-05-08T23:52:02.560860277Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 23:52:02.563232 containerd[1571]: time="2025-05-08T23:52:02.563202709Z" level=info msg="CreateContainer within sandbox \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 23:52:02.571175 containerd[1571]: time="2025-05-08T23:52:02.571144105Z" level=info msg="CreateContainer within sandbox \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\"" May 8 23:52:02.573723 containerd[1571]: time="2025-05-08T23:52:02.571829490Z" level=info msg="StartContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\"" May 8 23:52:02.618553 containerd[1571]: time="2025-05-08T23:52:02.618515727Z" level=info msg="StartContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" returns successfully" May 8 23:52:03.150453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39-rootfs.mount: Deactivated successfully. 
May 8 23:52:03.216634 kubelet[2796]: E0508 23:52:03.214467 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:03.219125 kubelet[2796]: E0508 23:52:03.218850 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:03.219228 containerd[1571]: time="2025-05-08T23:52:03.218936093Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:52:03.238065 containerd[1571]: time="2025-05-08T23:52:03.238013037Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\"" May 8 23:52:03.240332 containerd[1571]: time="2025-05-08T23:52:03.240200034Z" level=info msg="StartContainer for \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\"" May 8 23:52:03.318133 containerd[1571]: time="2025-05-08T23:52:03.318089818Z" level=info msg="StartContainer for \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\" returns successfully" May 8 23:52:03.358793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d-rootfs.mount: Deactivated successfully. 
May 8 23:52:03.420176 containerd[1571]: time="2025-05-08T23:52:03.420120885Z" level=info msg="shim disconnected" id=99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d namespace=k8s.io May 8 23:52:03.420176 containerd[1571]: time="2025-05-08T23:52:03.420173164Z" level=warning msg="cleaning up after shim disconnected" id=99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d namespace=k8s.io May 8 23:52:03.420176 containerd[1571]: time="2025-05-08T23:52:03.420181444Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:04.222543 kubelet[2796]: E0508 23:52:04.222509 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:04.222543 kubelet[2796]: E0508 23:52:04.222546 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:04.226066 containerd[1571]: time="2025-05-08T23:52:04.225771908Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:52:04.239391 kubelet[2796]: I0508 23:52:04.239188 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ndrfn" podStartSLOduration=2.862881063 podStartE2EDuration="8.239172215s" podCreationTimestamp="2025-05-08 23:51:56 +0000 UTC" firstStartedPulling="2025-05-08 23:51:57.185182672 +0000 UTC m=+15.137031618" lastFinishedPulling="2025-05-08 23:52:02.561473824 +0000 UTC m=+20.513322770" observedRunningTime="2025-05-08 23:52:03.245134816 +0000 UTC m=+21.196983762" watchObservedRunningTime="2025-05-08 23:52:04.239172215 +0000 UTC m=+22.191021161" May 8 23:52:04.249785 containerd[1571]: time="2025-05-08T23:52:04.249741496Z" level=info 
msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\"" May 8 23:52:04.250457 containerd[1571]: time="2025-05-08T23:52:04.250422123Z" level=info msg="StartContainer for \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\"" May 8 23:52:04.303796 containerd[1571]: time="2025-05-08T23:52:04.303694799Z" level=info msg="StartContainer for \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\" returns successfully" May 8 23:52:04.316172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6-rootfs.mount: Deactivated successfully. May 8 23:52:04.320947 containerd[1571]: time="2025-05-08T23:52:04.320890754Z" level=info msg="shim disconnected" id=ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6 namespace=k8s.io May 8 23:52:04.320947 containerd[1571]: time="2025-05-08T23:52:04.320946833Z" level=warning msg="cleaning up after shim disconnected" id=ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6 namespace=k8s.io May 8 23:52:04.321073 containerd[1571]: time="2025-05-08T23:52:04.320955913Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:05.228961 kubelet[2796]: E0508 23:52:05.228932 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:05.232730 containerd[1571]: time="2025-05-08T23:52:05.232572111Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:52:05.254973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119571128.mount: Deactivated 
successfully. May 8 23:52:05.258837 containerd[1571]: time="2025-05-08T23:52:05.258709279Z" level=info msg="CreateContainer within sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\"" May 8 23:52:05.259256 containerd[1571]: time="2025-05-08T23:52:05.259234990Z" level=info msg="StartContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\"" May 8 23:52:05.310711 containerd[1571]: time="2025-05-08T23:52:05.310651742Z" level=info msg="StartContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" returns successfully" May 8 23:52:05.416841 kubelet[2796]: I0508 23:52:05.415894 2796 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 23:52:05.438228 kubelet[2796]: I0508 23:52:05.438059 2796 topology_manager.go:215] "Topology Admit Handler" podUID="25c7ce9b-fe04-4e38-a31b-f47b48f76c21" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p7s5n" May 8 23:52:05.442649 kubelet[2796]: I0508 23:52:05.442536 2796 topology_manager.go:215] "Topology Admit Handler" podUID="e945b619-2996-4a78-acb7-b4a4ea112ab6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rjppf" May 8 23:52:05.585695 kubelet[2796]: I0508 23:52:05.585594 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c7ce9b-fe04-4e38-a31b-f47b48f76c21-config-volume\") pod \"coredns-7db6d8ff4d-p7s5n\" (UID: \"25c7ce9b-fe04-4e38-a31b-f47b48f76c21\") " pod="kube-system/coredns-7db6d8ff4d-p7s5n" May 8 23:52:05.586116 kubelet[2796]: I0508 23:52:05.585893 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ng4j\" (UniqueName: 
\"kubernetes.io/projected/25c7ce9b-fe04-4e38-a31b-f47b48f76c21-kube-api-access-5ng4j\") pod \"coredns-7db6d8ff4d-p7s5n\" (UID: \"25c7ce9b-fe04-4e38-a31b-f47b48f76c21\") " pod="kube-system/coredns-7db6d8ff4d-p7s5n" May 8 23:52:05.586116 kubelet[2796]: I0508 23:52:05.585926 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e945b619-2996-4a78-acb7-b4a4ea112ab6-config-volume\") pod \"coredns-7db6d8ff4d-rjppf\" (UID: \"e945b619-2996-4a78-acb7-b4a4ea112ab6\") " pod="kube-system/coredns-7db6d8ff4d-rjppf" May 8 23:52:05.586116 kubelet[2796]: I0508 23:52:05.585947 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhx6\" (UniqueName: \"kubernetes.io/projected/e945b619-2996-4a78-acb7-b4a4ea112ab6-kube-api-access-2rhx6\") pod \"coredns-7db6d8ff4d-rjppf\" (UID: \"e945b619-2996-4a78-acb7-b4a4ea112ab6\") " pod="kube-system/coredns-7db6d8ff4d-rjppf" May 8 23:52:05.745958 kubelet[2796]: E0508 23:52:05.745862 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:05.747841 containerd[1571]: time="2025-05-08T23:52:05.747804213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7s5n,Uid:25c7ce9b-fe04-4e38-a31b-f47b48f76c21,Namespace:kube-system,Attempt:0,}" May 8 23:52:05.748148 kubelet[2796]: E0508 23:52:05.748120 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:05.748502 containerd[1571]: time="2025-05-08T23:52:05.748441081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjppf,Uid:e945b619-2996-4a78-acb7-b4a4ea112ab6,Namespace:kube-system,Attempt:0,}" May 8 23:52:06.233953 
kubelet[2796]: E0508 23:52:06.233895 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:07.235800 kubelet[2796]: E0508 23:52:07.235748 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:07.506398 systemd-networkd[1231]: cilium_host: Link UP May 8 23:52:07.507372 systemd-networkd[1231]: cilium_net: Link UP May 8 23:52:07.507615 systemd-networkd[1231]: cilium_net: Gained carrier May 8 23:52:07.508934 systemd-networkd[1231]: cilium_host: Gained carrier May 8 23:52:07.509108 systemd-networkd[1231]: cilium_net: Gained IPv6LL May 8 23:52:07.509245 systemd-networkd[1231]: cilium_host: Gained IPv6LL May 8 23:52:07.586919 systemd-networkd[1231]: cilium_vxlan: Link UP May 8 23:52:07.586926 systemd-networkd[1231]: cilium_vxlan: Gained carrier May 8 23:52:07.881811 kernel: NET: Registered PF_ALG protocol family May 8 23:52:08.236984 kubelet[2796]: E0508 23:52:08.236957 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:08.448838 systemd-networkd[1231]: lxc_health: Link UP May 8 23:52:08.456856 systemd-networkd[1231]: lxc_health: Gained carrier May 8 23:52:08.828034 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:44214.service - OpenSSH per-connection server daemon (10.0.0.1:44214). 
May 8 23:52:08.872562 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 44214 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:08.874406 sshd-session[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:08.876446 systemd-networkd[1231]: lxc796f4ee45d28: Link UP May 8 23:52:08.882265 systemd-logind[1552]: New session 8 of user core. May 8 23:52:08.889352 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 23:52:08.890010 kernel: eth0: renamed from tmp02a71 May 8 23:52:08.889982 systemd-networkd[1231]: lxc40c3e85b69c9: Link UP May 8 23:52:08.906157 systemd-networkd[1231]: cilium_vxlan: Gained IPv6LL May 8 23:52:08.906824 kernel: eth0: renamed from tmp94ee4 May 8 23:52:08.907169 systemd-networkd[1231]: lxc796f4ee45d28: Gained carrier May 8 23:52:08.916642 systemd-networkd[1231]: lxc40c3e85b69c9: Gained carrier May 8 23:52:09.042536 sshd[4003]: Connection closed by 10.0.0.1 port 44214 May 8 23:52:09.043553 sshd-session[3994]: pam_unix(sshd:session): session closed for user core May 8 23:52:09.048772 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:44214.service: Deactivated successfully. May 8 23:52:09.055721 systemd[1]: session-8.scope: Deactivated successfully. May 8 23:52:09.056816 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. May 8 23:52:09.061362 systemd-logind[1552]: Removed session 8. 
May 8 23:52:09.070433 kubelet[2796]: I0508 23:52:09.070373 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qz5s8" podStartSLOduration=9.053101547 podStartE2EDuration="13.070357977s" podCreationTimestamp="2025-05-08 23:51:56 +0000 UTC" firstStartedPulling="2025-05-08 23:51:57.106071229 +0000 UTC m=+15.057920175" lastFinishedPulling="2025-05-08 23:52:01.123327699 +0000 UTC m=+19.075176605" observedRunningTime="2025-05-08 23:52:06.250102897 +0000 UTC m=+24.201951843" watchObservedRunningTime="2025-05-08 23:52:09.070357977 +0000 UTC m=+27.022206923" May 8 23:52:09.246114 kubelet[2796]: E0508 23:52:09.244512 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:10.173948 systemd-networkd[1231]: lxc796f4ee45d28: Gained IPv6LL May 8 23:52:10.174261 systemd-networkd[1231]: lxc_health: Gained IPv6LL May 8 23:52:10.560910 systemd-networkd[1231]: lxc40c3e85b69c9: Gained IPv6LL May 8 23:52:12.449923 containerd[1571]: time="2025-05-08T23:52:12.449829643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:52:12.450646 containerd[1571]: time="2025-05-08T23:52:12.449906882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:52:12.450646 containerd[1571]: time="2025-05-08T23:52:12.450329717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:12.450971 containerd[1571]: time="2025-05-08T23:52:12.450892589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:12.452395 containerd[1571]: time="2025-05-08T23:52:12.452333649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:52:12.452921 containerd[1571]: time="2025-05-08T23:52:12.452885482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:52:12.453067 containerd[1571]: time="2025-05-08T23:52:12.452940081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:12.453067 containerd[1571]: time="2025-05-08T23:52:12.453023040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:12.474468 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:52:12.475354 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:52:12.494945 containerd[1571]: time="2025-05-08T23:52:12.494884070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rjppf,Uid:e945b619-2996-4a78-acb7-b4a4ea112ab6,Namespace:kube-system,Attempt:0,} returns sandbox id \"94ee4c416e17d0ebe9e2cc94f093159523a0a7bdcb76acc17ef991d1bc49fc45\"" May 8 23:52:12.495813 kubelet[2796]: E0508 23:52:12.495682 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:12.499598 containerd[1571]: time="2025-05-08T23:52:12.499546606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7s5n,Uid:25c7ce9b-fe04-4e38-a31b-f47b48f76c21,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"02a710bdd6569730d802e8e83538e8f5f35d977b29873df19b1659f04226b0be\"" May 8 23:52:12.500029 kubelet[2796]: E0508 23:52:12.500014 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:12.504049 containerd[1571]: time="2025-05-08T23:52:12.504018705Z" level=info msg="CreateContainer within sandbox \"94ee4c416e17d0ebe9e2cc94f093159523a0a7bdcb76acc17ef991d1bc49fc45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:52:12.504970 containerd[1571]: time="2025-05-08T23:52:12.504934453Z" level=info msg="CreateContainer within sandbox \"02a710bdd6569730d802e8e83538e8f5f35d977b29873df19b1659f04226b0be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:52:12.518866 containerd[1571]: time="2025-05-08T23:52:12.518824743Z" level=info msg="CreateContainer within sandbox \"94ee4c416e17d0ebe9e2cc94f093159523a0a7bdcb76acc17ef991d1bc49fc45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1928e699a7d12e1293eddf7d8b8e8c447ad123b8958c22641e0d3a35b30a7329\"" May 8 23:52:12.520180 containerd[1571]: time="2025-05-08T23:52:12.520148885Z" level=info msg="StartContainer for \"1928e699a7d12e1293eddf7d8b8e8c447ad123b8958c22641e0d3a35b30a7329\"" May 8 23:52:12.530319 containerd[1571]: time="2025-05-08T23:52:12.530243708Z" level=info msg="CreateContainer within sandbox \"02a710bdd6569730d802e8e83538e8f5f35d977b29873df19b1659f04226b0be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b009da296f7e8bdff027a2e09e919adf20d6b37efd09daaf2598234a186f70b6\"" May 8 23:52:12.530750 containerd[1571]: time="2025-05-08T23:52:12.530651022Z" level=info msg="StartContainer for \"b009da296f7e8bdff027a2e09e919adf20d6b37efd09daaf2598234a186f70b6\"" May 8 23:52:12.594109 containerd[1571]: time="2025-05-08T23:52:12.594057238Z" level=info msg="StartContainer for 
\"1928e699a7d12e1293eddf7d8b8e8c447ad123b8958c22641e0d3a35b30a7329\" returns successfully" May 8 23:52:12.594253 containerd[1571]: time="2025-05-08T23:52:12.594146397Z" level=info msg="StartContainer for \"b009da296f7e8bdff027a2e09e919adf20d6b37efd09daaf2598234a186f70b6\" returns successfully" May 8 23:52:13.253207 kubelet[2796]: E0508 23:52:13.252391 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:13.256101 kubelet[2796]: E0508 23:52:13.254908 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:13.280755 kubelet[2796]: I0508 23:52:13.280662 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p7s5n" podStartSLOduration=17.280642655 podStartE2EDuration="17.280642655s" podCreationTimestamp="2025-05-08 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:52:13.265026501 +0000 UTC m=+31.216875447" watchObservedRunningTime="2025-05-08 23:52:13.280642655 +0000 UTC m=+31.232491601" May 8 23:52:13.294225 kubelet[2796]: I0508 23:52:13.294150 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rjppf" podStartSLOduration=17.294130958 podStartE2EDuration="17.294130958s" podCreationTimestamp="2025-05-08 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:52:13.292832655 +0000 UTC m=+31.244681601" watchObservedRunningTime="2025-05-08 23:52:13.294130958 +0000 UTC m=+31.245979864" May 8 23:52:14.057424 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:57524.service - OpenSSH 
per-connection server daemon (10.0.0.1:57524). May 8 23:52:14.107669 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:14.108843 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:14.115080 systemd-logind[1552]: New session 9 of user core. May 8 23:52:14.124060 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 23:52:14.259604 kubelet[2796]: E0508 23:52:14.259279 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:14.260773 kubelet[2796]: E0508 23:52:14.260754 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:14.265088 sshd[4204]: Connection closed by 10.0.0.1 port 57524 May 8 23:52:14.265175 sshd-session[4201]: pam_unix(sshd:session): session closed for user core May 8 23:52:14.269953 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:57524.service: Deactivated successfully. May 8 23:52:14.273184 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. May 8 23:52:14.273311 systemd[1]: session-9.scope: Deactivated successfully. May 8 23:52:14.274601 systemd-logind[1552]: Removed session 9. May 8 23:52:15.262208 kubelet[2796]: E0508 23:52:15.260972 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:19.289036 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:57532.service - OpenSSH per-connection server daemon (10.0.0.1:57532). 
May 8 23:52:19.324549 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 57532 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:19.325870 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:19.330050 systemd-logind[1552]: New session 10 of user core. May 8 23:52:19.337045 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 23:52:19.449582 sshd[4220]: Connection closed by 10.0.0.1 port 57532 May 8 23:52:19.449985 sshd-session[4217]: pam_unix(sshd:session): session closed for user core May 8 23:52:19.455770 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:57532.service: Deactivated successfully. May 8 23:52:19.457877 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. May 8 23:52:19.457921 systemd[1]: session-10.scope: Deactivated successfully. May 8 23:52:19.459137 systemd-logind[1552]: Removed session 10. May 8 23:52:24.468184 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010). May 8 23:52:24.507068 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:24.508192 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:24.512007 systemd-logind[1552]: New session 11 of user core. May 8 23:52:24.521036 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 23:52:24.630563 sshd[4237]: Connection closed by 10.0.0.1 port 46010 May 8 23:52:24.632156 sshd-session[4234]: pam_unix(sshd:session): session closed for user core May 8 23:52:24.643141 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:46024.service - OpenSSH per-connection server daemon (10.0.0.1:46024). May 8 23:52:24.643504 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:46010.service: Deactivated successfully. 
May 8 23:52:24.646269 systemd[1]: session-11.scope: Deactivated successfully.
May 8 23:52:24.646724 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit.
May 8 23:52:24.648046 systemd-logind[1552]: Removed session 11.
May 8 23:52:24.681802 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:24.682954 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:24.686750 systemd-logind[1552]: New session 12 of user core.
May 8 23:52:24.696991 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 23:52:24.849822 sshd[4254]: Connection closed by 10.0.0.1 port 46024
May 8 23:52:24.850449 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
May 8 23:52:24.857106 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:46030.service - OpenSSH per-connection server daemon (10.0.0.1:46030).
May 8 23:52:24.857472 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:46024.service: Deactivated successfully.
May 8 23:52:24.860361 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit.
May 8 23:52:24.860385 systemd[1]: session-12.scope: Deactivated successfully.
May 8 23:52:24.868190 systemd-logind[1552]: Removed session 12.
May 8 23:52:24.908253 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 46030 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:24.909246 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:24.913143 systemd-logind[1552]: New session 13 of user core.
May 8 23:52:24.921049 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 23:52:25.031886 sshd[4268]: Connection closed by 10.0.0.1 port 46030
May 8 23:52:25.032228 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
May 8 23:52:25.035486 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:46030.service: Deactivated successfully.
May 8 23:52:25.037804 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit.
May 8 23:52:25.038214 systemd[1]: session-13.scope: Deactivated successfully.
May 8 23:52:25.039715 systemd-logind[1552]: Removed session 13.
May 8 23:52:26.668354 kubelet[2796]: I0508 23:52:26.667926 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 23:52:26.669187 kubelet[2796]: E0508 23:52:26.668765 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:27.286696 kubelet[2796]: E0508 23:52:27.286620 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:30.048049 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:46034.service - OpenSSH per-connection server daemon (10.0.0.1:46034).
May 8 23:52:30.083973 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 46034 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:30.085195 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:30.088908 systemd-logind[1552]: New session 14 of user core.
May 8 23:52:30.104087 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 23:52:30.212167 sshd[4287]: Connection closed by 10.0.0.1 port 46034
May 8 23:52:30.212513 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
May 8 23:52:30.224038 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:46038.service - OpenSSH per-connection server daemon (10.0.0.1:46038).
May 8 23:52:30.224449 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:46034.service: Deactivated successfully.
May 8 23:52:30.226231 systemd[1]: session-14.scope: Deactivated successfully.
May 8 23:52:30.227444 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit.
May 8 23:52:30.228383 systemd-logind[1552]: Removed session 14.
May 8 23:52:30.263177 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 46038 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:30.264348 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:30.268325 systemd-logind[1552]: New session 15 of user core.
May 8 23:52:30.278025 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 23:52:30.471978 sshd[4302]: Connection closed by 10.0.0.1 port 46038
May 8 23:52:30.471452 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
May 8 23:52:30.482105 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:46050.service - OpenSSH per-connection server daemon (10.0.0.1:46050).
May 8 23:52:30.482466 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:46038.service: Deactivated successfully.
May 8 23:52:30.485173 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit.
May 8 23:52:30.485261 systemd[1]: session-15.scope: Deactivated successfully.
May 8 23:52:30.486521 systemd-logind[1552]: Removed session 15.
May 8 23:52:30.522043 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 46050 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:30.523220 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:30.527626 systemd-logind[1552]: New session 16 of user core.
May 8 23:52:30.540042 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 23:52:31.748446 sshd[4315]: Connection closed by 10.0.0.1 port 46050
May 8 23:52:31.750362 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
May 8 23:52:31.758174 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:46052.service - OpenSSH per-connection server daemon (10.0.0.1:46052).
May 8 23:52:31.759054 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:46050.service: Deactivated successfully.
May 8 23:52:31.763616 systemd[1]: session-16.scope: Deactivated successfully.
May 8 23:52:31.765543 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit.
May 8 23:52:31.769002 systemd-logind[1552]: Removed session 16.
May 8 23:52:31.809659 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 46052 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:31.810941 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:31.815802 systemd-logind[1552]: New session 17 of user core.
May 8 23:52:31.822166 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 23:52:32.042898 sshd[4337]: Connection closed by 10.0.0.1 port 46052
May 8 23:52:32.043533 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
May 8 23:52:32.052030 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:46064.service - OpenSSH per-connection server daemon (10.0.0.1:46064).
May 8 23:52:32.052417 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:46052.service: Deactivated successfully.
May 8 23:52:32.059289 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit.
May 8 23:52:32.059847 systemd[1]: session-17.scope: Deactivated successfully.
May 8 23:52:32.061937 systemd-logind[1552]: Removed session 17.
May 8 23:52:32.087462 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 46064 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:32.088677 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:32.092700 systemd-logind[1552]: New session 18 of user core.
May 8 23:52:32.101067 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 23:52:32.208992 sshd[4351]: Connection closed by 10.0.0.1 port 46064
May 8 23:52:32.209325 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
May 8 23:52:32.211660 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:46064.service: Deactivated successfully.
May 8 23:52:32.215097 systemd[1]: session-18.scope: Deactivated successfully.
May 8 23:52:32.217242 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit.
May 8 23:52:32.218532 systemd-logind[1552]: Removed session 18.
May 8 23:52:37.224070 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:60740.service - OpenSSH per-connection server daemon (10.0.0.1:60740).
May 8 23:52:37.261422 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 60740 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:37.262730 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:37.266994 systemd-logind[1552]: New session 19 of user core.
May 8 23:52:37.276072 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 23:52:37.389023 sshd[4369]: Connection closed by 10.0.0.1 port 60740
May 8 23:52:37.389375 sshd-session[4366]: pam_unix(sshd:session): session closed for user core
May 8 23:52:37.393664 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit.
May 8 23:52:37.393715 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:60740.service: Deactivated successfully.
May 8 23:52:37.395573 systemd[1]: session-19.scope: Deactivated successfully.
May 8 23:52:37.396451 systemd-logind[1552]: Removed session 19.
May 8 23:52:42.410129 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:60756.service - OpenSSH per-connection server daemon (10.0.0.1:60756).
May 8 23:52:42.457394 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 60756 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:42.460238 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:42.467214 systemd-logind[1552]: New session 20 of user core.
May 8 23:52:42.473432 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 23:52:42.617775 sshd[4386]: Connection closed by 10.0.0.1 port 60756
May 8 23:52:42.618146 sshd-session[4383]: pam_unix(sshd:session): session closed for user core
May 8 23:52:42.621561 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:60756.service: Deactivated successfully.
May 8 23:52:42.624421 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit.
May 8 23:52:42.624990 systemd[1]: session-20.scope: Deactivated successfully.
May 8 23:52:42.627795 systemd-logind[1552]: Removed session 20.
May 8 23:52:47.640077 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:58112.service - OpenSSH per-connection server daemon (10.0.0.1:58112).
May 8 23:52:47.682755 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 58112 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:47.683179 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:47.689961 systemd-logind[1552]: New session 21 of user core.
May 8 23:52:47.700449 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 23:52:47.829734 sshd[4402]: Connection closed by 10.0.0.1 port 58112
May 8 23:52:47.830248 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
May 8 23:52:47.840053 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128).
May 8 23:52:47.840453 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:58112.service: Deactivated successfully.
May 8 23:52:47.843059 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit.
May 8 23:52:47.843533 systemd[1]: session-21.scope: Deactivated successfully.
May 8 23:52:47.845691 systemd-logind[1552]: Removed session 21.
May 8 23:52:47.877712 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y
May 8 23:52:47.879533 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:52:47.884370 systemd-logind[1552]: New session 22 of user core.
May 8 23:52:47.896163 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 23:52:50.003900 containerd[1571]: time="2025-05-08T23:52:50.003800834Z" level=info msg="StopContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" with timeout 30 (s)"
May 8 23:52:50.006000 containerd[1571]: time="2025-05-08T23:52:50.005234981Z" level=info msg="Stop container \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" with signal terminated"
May 8 23:52:50.037537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d-rootfs.mount: Deactivated successfully.
May 8 23:52:50.049605 containerd[1571]: time="2025-05-08T23:52:50.048733122Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 23:52:50.051279 containerd[1571]: time="2025-05-08T23:52:50.051223569Z" level=info msg="shim disconnected" id=30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d namespace=k8s.io
May 8 23:52:50.051279 containerd[1571]: time="2025-05-08T23:52:50.051273330Z" level=warning msg="cleaning up after shim disconnected" id=30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d namespace=k8s.io
May 8 23:52:50.051279 containerd[1571]: time="2025-05-08T23:52:50.051281371Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:50.056198 containerd[1571]: time="2025-05-08T23:52:50.056164463Z" level=info msg="StopContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" with timeout 2 (s)"
May 8 23:52:50.056410 containerd[1571]: time="2025-05-08T23:52:50.056389627Z" level=info msg="Stop container \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" with signal terminated"
May 8 23:52:50.062020 systemd-networkd[1231]: lxc_health: Link DOWN
May 8 23:52:50.062027 systemd-networkd[1231]: lxc_health: Lost carrier
May 8 23:52:50.107387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a-rootfs.mount: Deactivated successfully.
May 8 23:52:50.122946 containerd[1571]: time="2025-05-08T23:52:50.122883002Z" level=info msg="StopContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" returns successfully"
May 8 23:52:50.141566 containerd[1571]: time="2025-05-08T23:52:50.141529954Z" level=info msg="StopPodSandbox for \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\""
May 8 23:52:50.141721 containerd[1571]: time="2025-05-08T23:52:50.141578035Z" level=info msg="Container to stop \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.143392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2-shm.mount: Deactivated successfully.
May 8 23:52:50.171472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2-rootfs.mount: Deactivated successfully.
May 8 23:52:50.174980 containerd[1571]: time="2025-05-08T23:52:50.174762062Z" level=info msg="shim disconnected" id=c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a namespace=k8s.io
May 8 23:52:50.174980 containerd[1571]: time="2025-05-08T23:52:50.174976746Z" level=warning msg="cleaning up after shim disconnected" id=c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a namespace=k8s.io
May 8 23:52:50.175144 containerd[1571]: time="2025-05-08T23:52:50.174988546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:50.238070 containerd[1571]: time="2025-05-08T23:52:50.238018536Z" level=info msg="StopContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" returns successfully"
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238427023Z" level=info msg="shim disconnected" id=2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2 namespace=k8s.io
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238470904Z" level=warning msg="cleaning up after shim disconnected" id=2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2 namespace=k8s.io
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238478704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238908712Z" level=info msg="StopPodSandbox for \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\""
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238937713Z" level=info msg="Container to stop \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238948833Z" level=info msg="Container to stop \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238957033Z" level=info msg="Container to stop \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238964673Z" level=info msg="Container to stop \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.239797 containerd[1571]: time="2025-05-08T23:52:50.238976434Z" level=info msg="Container to stop \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:52:50.256978 containerd[1571]: time="2025-05-08T23:52:50.256860091Z" level=info msg="TearDown network for sandbox \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\" successfully"
May 8 23:52:50.256978 containerd[1571]: time="2025-05-08T23:52:50.256897532Z" level=info msg="StopPodSandbox for \"2479588bc7e4c5d5eb788560b91160f178cf93505f99e0a8d546b3ec1614d0e2\" returns successfully"
May 8 23:52:50.307375 containerd[1571]: time="2025-05-08T23:52:50.307316444Z" level=info msg="shim disconnected" id=842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0 namespace=k8s.io
May 8 23:52:50.307931 containerd[1571]: time="2025-05-08T23:52:50.307718251Z" level=warning msg="cleaning up after shim disconnected" id=842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0 namespace=k8s.io
May 8 23:52:50.307931 containerd[1571]: time="2025-05-08T23:52:50.307734812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:50.318356 containerd[1571]: time="2025-05-08T23:52:50.318306411Z" level=info msg="TearDown network for sandbox \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" successfully"
May 8 23:52:50.318356 containerd[1571]: time="2025-05-08T23:52:50.318344852Z" level=info msg="StopPodSandbox for \"842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0\" returns successfully"
May 8 23:52:50.335590 kubelet[2796]: I0508 23:52:50.335532 2796 scope.go:117] "RemoveContainer" containerID="30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d"
May 8 23:52:50.337198 containerd[1571]: time="2025-05-08T23:52:50.336831401Z" level=info msg="RemoveContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\""
May 8 23:52:50.348082 containerd[1571]: time="2025-05-08T23:52:50.348015892Z" level=info msg="RemoveContainer for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" returns successfully"
May 8 23:52:50.348499 kubelet[2796]: I0508 23:52:50.348391 2796 scope.go:117] "RemoveContainer" containerID="30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d"
May 8 23:52:50.349239 containerd[1571]: time="2025-05-08T23:52:50.349134073Z" level=error msg="ContainerStatus for \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\": not found"
May 8 23:52:50.351539 kubelet[2796]: E0508 23:52:50.351478 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\": not found" containerID="30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d"
May 8 23:52:50.351625 kubelet[2796]: I0508 23:52:50.351531 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d"} err="failed to get container status \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\": rpc error: code = NotFound desc = an error occurred when try to find container \"30795d0721850c91c10477e7c34561a15fbf5936b1746eea3a5ac46f230ec98d\": not found"
May 8 23:52:50.351625 kubelet[2796]: I0508 23:52:50.351616 2796 scope.go:117] "RemoveContainer" containerID="c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a"
May 8 23:52:50.353592 containerd[1571]: time="2025-05-08T23:52:50.353536836Z" level=info msg="RemoveContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\""
May 8 23:52:50.359310 kubelet[2796]: I0508 23:52:50.359138 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqsfj\" (UniqueName: \"kubernetes.io/projected/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-kube-api-access-rqsfj\") pod \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\" (UID: \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\") "
May 8 23:52:50.359310 kubelet[2796]: I0508 23:52:50.359183 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-cilium-config-path\") pod \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\" (UID: \"82e102e7-24b6-45e7-9697-68f1d9c4fcd9\") "
May 8 23:52:50.370181 kubelet[2796]: I0508 23:52:50.370139 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-kube-api-access-rqsfj" (OuterVolumeSpecName: "kube-api-access-rqsfj") pod "82e102e7-24b6-45e7-9697-68f1d9c4fcd9" (UID: "82e102e7-24b6-45e7-9697-68f1d9c4fcd9"). InnerVolumeSpecName "kube-api-access-rqsfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 23:52:50.371018 containerd[1571]: time="2025-05-08T23:52:50.370978886Z" level=info msg="RemoveContainer for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" returns successfully"
May 8 23:52:50.371085 kubelet[2796]: I0508 23:52:50.370983 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82e102e7-24b6-45e7-9697-68f1d9c4fcd9" (UID: "82e102e7-24b6-45e7-9697-68f1d9c4fcd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 8 23:52:50.371203 kubelet[2796]: I0508 23:52:50.371182 2796 scope.go:117] "RemoveContainer" containerID="ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6"
May 8 23:52:50.372232 containerd[1571]: time="2025-05-08T23:52:50.372200469Z" level=info msg="RemoveContainer for \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\""
May 8 23:52:50.377056 containerd[1571]: time="2025-05-08T23:52:50.377020720Z" level=info msg="RemoveContainer for \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\" returns successfully"
May 8 23:52:50.377387 kubelet[2796]: I0508 23:52:50.377336 2796 scope.go:117] "RemoveContainer" containerID="99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d"
May 8 23:52:50.378359 containerd[1571]: time="2025-05-08T23:52:50.378312984Z" level=info msg="RemoveContainer for \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\""
May 8 23:52:50.382770 containerd[1571]: time="2025-05-08T23:52:50.382726547Z" level=info msg="RemoveContainer for \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\" returns successfully"
May 8 23:52:50.383025 kubelet[2796]: I0508 23:52:50.382979 2796 scope.go:117] "RemoveContainer" containerID="34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39"
May 8 23:52:50.384127 containerd[1571]: time="2025-05-08T23:52:50.384097933Z" level=info msg="RemoveContainer for \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\""
May 8 23:52:50.389878 containerd[1571]: time="2025-05-08T23:52:50.389835121Z" level=info msg="RemoveContainer for \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\" returns successfully"
May 8 23:52:50.390105 kubelet[2796]: I0508 23:52:50.390079 2796 scope.go:117] "RemoveContainer" containerID="592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b"
May 8 23:52:50.391210 containerd[1571]: time="2025-05-08T23:52:50.391183547Z" level=info msg="RemoveContainer for \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\""
May 8 23:52:50.395984 containerd[1571]: time="2025-05-08T23:52:50.395942717Z" level=info msg="RemoveContainer for \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\" returns successfully"
May 8 23:52:50.396244 kubelet[2796]: I0508 23:52:50.396224 2796 scope.go:117] "RemoveContainer" containerID="c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a"
May 8 23:52:50.396611 containerd[1571]: time="2025-05-08T23:52:50.396564008Z" level=error msg="ContainerStatus for \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\": not found"
May 8 23:52:50.396890 kubelet[2796]: E0508 23:52:50.396740 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\": not found" containerID="c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a"
May 8 23:52:50.396890 kubelet[2796]: I0508 23:52:50.396801 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a"} err="failed to get container status \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7a048b9da40c5ab660252bdf53eb146a8bb42c18e943d3b794793c010154a7a\": not found"
May 8 23:52:50.396890 kubelet[2796]: I0508 23:52:50.396826 2796 scope.go:117] "RemoveContainer" containerID="ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6"
May 8 23:52:50.397038 containerd[1571]: time="2025-05-08T23:52:50.397002417Z" level=error msg="ContainerStatus for \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\": not found"
May 8 23:52:50.397143 kubelet[2796]: E0508 23:52:50.397124 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\": not found" containerID="ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6"
May 8 23:52:50.397176 kubelet[2796]: I0508 23:52:50.397150 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6"} err="failed to get container status \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac948df655ee2ace36a900829a77eb6b326df00fd200c05a37038ed1a4b687f6\": not found"
May 8 23:52:50.397176 kubelet[2796]: I0508 23:52:50.397169 2796 scope.go:117] "RemoveContainer" containerID="99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d"
May 8 23:52:50.397341 containerd[1571]: time="2025-05-08T23:52:50.397303222Z" level=error msg="ContainerStatus for \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\": not found"
May 8 23:52:50.397556 kubelet[2796]: E0508 23:52:50.397445 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\": not found" containerID="99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d"
May 8 23:52:50.397556 kubelet[2796]: I0508 23:52:50.397472 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d"} err="failed to get container status \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"99da6f6a01e07c8e569a813b33edacb754c7322a226f3bd8f31bda9ceae3cf7d\": not found"
May 8 23:52:50.397556 kubelet[2796]: I0508 23:52:50.397486 2796 scope.go:117] "RemoveContainer" containerID="34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39"
May 8 23:52:50.397651 containerd[1571]: time="2025-05-08T23:52:50.397609748Z" level=error msg="ContainerStatus for \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\": not found"
May 8 23:52:50.397945 kubelet[2796]: E0508 23:52:50.397741 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\": not found" containerID="34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39"
May 8 23:52:50.397945 kubelet[2796]: I0508 23:52:50.397866 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39"} err="failed to get container status \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\": rpc error: code = NotFound desc = an error occurred when try to find container \"34641d5c5d2a6f61bd8c76e526c224539226536fcea37be08c06983bc1b50c39\": not found"
May 8 23:52:50.397945 kubelet[2796]: I0508 23:52:50.397885 2796 scope.go:117] "RemoveContainer" containerID="592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b"
May 8 23:52:50.398049 containerd[1571]: time="2025-05-08T23:52:50.398011796Z" level=error msg="ContainerStatus for \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\": not found"
May 8 23:52:50.398143 kubelet[2796]: E0508 23:52:50.398117 2796 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\": not found" containerID="592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b"
May 8 23:52:50.398174 kubelet[2796]: I0508 23:52:50.398141 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b"} err="failed to get container status \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\": rpc error: code = NotFound desc = an error occurred when try to find container \"592fc09910feb4bd1055ad89448a36d2a14142461f7a4c82ec76dbfb7315218b\": not found"
May 8 23:52:50.459530 kubelet[2796]: I0508 23:52:50.459476 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8sp\" (UniqueName: \"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-kube-api-access-hn8sp\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459530 kubelet[2796]: I0508 23:52:50.459531 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-kernel\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459530 kubelet[2796]: I0508 23:52:50.459552 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-lib-modules\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459568 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-bpf-maps\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459584 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hubble-tls\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459598 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-cgroup\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459613 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-etc-cni-netd\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459629 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-xtables-lock\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459728 kubelet[2796]: I0508 23:52:50.459642 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cni-path\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459662 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-clustermesh-secrets\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") "
May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459676 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-net\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\")
" May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459689 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-run\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459706 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-config-path\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459726 2796 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hostproc\") pod \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\" (UID: \"39d9cabd-185c-4cc0-8cdc-a7d6d335ae45\") " May 8 23:52:50.459886 kubelet[2796]: I0508 23:52:50.459757 2796 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rqsfj\" (UniqueName: \"kubernetes.io/projected/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-kube-api-access-rqsfj\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.460016 kubelet[2796]: I0508 23:52:50.459768 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e102e7-24b6-45e7-9697-68f1d9c4fcd9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.460016 kubelet[2796]: I0508 23:52:50.459844 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hostproc" (OuterVolumeSpecName: "hostproc") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.461830 kubelet[2796]: I0508 23:52:50.460115 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.461830 kubelet[2796]: I0508 23:52:50.460153 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.461830 kubelet[2796]: I0508 23:52:50.460170 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.461830 kubelet[2796]: I0508 23:52:50.460190 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.461830 kubelet[2796]: I0508 23:52:50.460440 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.462016 kubelet[2796]: I0508 23:52:50.460468 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.462016 kubelet[2796]: I0508 23:52:50.460482 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.462016 kubelet[2796]: I0508 23:52:50.461579 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.462016 kubelet[2796]: I0508 23:52:50.461618 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cni-path" (OuterVolumeSpecName: "cni-path") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:50.462743 kubelet[2796]: I0508 23:52:50.462652 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 23:52:50.462743 kubelet[2796]: I0508 23:52:50.462670 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:52:50.462743 kubelet[2796]: I0508 23:52:50.462697 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 23:52:50.463151 kubelet[2796]: I0508 23:52:50.463114 2796 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-kube-api-access-hn8sp" (OuterVolumeSpecName: "kube-api-access-hn8sp") pod "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" (UID: "39d9cabd-185c-4cc0-8cdc-a7d6d335ae45"). InnerVolumeSpecName "kube-api-access-hn8sp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560305 2796 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560348 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560366 2796 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560382 2796 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560396 2796 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560413 2796 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560426 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560445 kubelet[2796]: I0508 23:52:50.560439 2796 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560453 2796 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560467 2796 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560481 2796 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hn8sp\" (UniqueName: \"kubernetes.io/projected/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-kube-api-access-hn8sp\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560496 2796 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560513 2796 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-lib-modules\") on node 
\"localhost\" DevicePath \"\"" May 8 23:52:50.560718 kubelet[2796]: I0508 23:52:50.560520 2796 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 23:52:51.027432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0-rootfs.mount: Deactivated successfully. May 8 23:52:51.027586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-842b2ea4b0acf8f1c137ebb3047bce1dc1de3e5d5aa92af1e422ca7ebd6f94e0-shm.mount: Deactivated successfully. May 8 23:52:51.027675 systemd[1]: var-lib-kubelet-pods-39d9cabd\x2d185c\x2d4cc0\x2d8cdc\x2da7d6d335ae45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhn8sp.mount: Deactivated successfully. May 8 23:52:51.027755 systemd[1]: var-lib-kubelet-pods-82e102e7\x2d24b6\x2d45e7\x2d9697\x2d68f1d9c4fcd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqsfj.mount: Deactivated successfully. May 8 23:52:51.027858 systemd[1]: var-lib-kubelet-pods-39d9cabd\x2d185c\x2d4cc0\x2d8cdc\x2da7d6d335ae45-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 23:52:51.027938 systemd[1]: var-lib-kubelet-pods-39d9cabd\x2d185c\x2d4cc0\x2d8cdc\x2da7d6d335ae45-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 23:52:51.997208 sshd[4417]: Connection closed by 10.0.0.1 port 58128 May 8 23:52:51.997553 sshd-session[4411]: pam_unix(sshd:session): session closed for user core May 8 23:52:52.007010 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140). May 8 23:52:52.007363 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:58128.service: Deactivated successfully. May 8 23:52:52.011491 systemd[1]: session-22.scope: Deactivated successfully. 
May 8 23:52:52.013182 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. May 8 23:52:52.014438 systemd-logind[1552]: Removed session 22. May 8 23:52:52.048025 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:52.049268 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:52.053353 systemd-logind[1552]: New session 23 of user core. May 8 23:52:52.066017 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 23:52:52.140588 kubelet[2796]: I0508 23:52:52.140550 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" path="/var/lib/kubelet/pods/39d9cabd-185c-4cc0-8cdc-a7d6d335ae45/volumes" May 8 23:52:52.141122 kubelet[2796]: I0508 23:52:52.141102 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e102e7-24b6-45e7-9697-68f1d9c4fcd9" path="/var/lib/kubelet/pods/82e102e7-24b6-45e7-9697-68f1d9c4fcd9/volumes" May 8 23:52:52.197575 kubelet[2796]: E0508 23:52:52.197495 2796 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:52:53.072813 sshd[4584]: Connection closed by 10.0.0.1 port 58140 May 8 23:52:53.073191 sshd-session[4578]: pam_unix(sshd:session): session closed for user core May 8 23:52:53.084238 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:42782.service - OpenSSH per-connection server daemon (10.0.0.1:42782). May 8 23:52:53.085405 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:58140.service: Deactivated successfully. May 8 23:52:53.089957 systemd[1]: session-23.scope: Deactivated successfully. May 8 23:52:53.091391 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. 
May 8 23:52:53.101120 kubelet[2796]: I0508 23:52:53.100492 2796 topology_manager.go:215] "Topology Admit Handler" podUID="4511ec56-0a1b-4c7b-bcae-04d5fa025e2e" podNamespace="kube-system" podName="cilium-9g6bb" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100627 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="cilium-agent" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100638 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="mount-cgroup" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100643 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="apply-sysctl-overwrites" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100649 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="mount-bpf-fs" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100654 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="clean-cilium-state" May 8 23:52:53.101120 kubelet[2796]: E0508 23:52:53.100660 2796 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82e102e7-24b6-45e7-9697-68f1d9c4fcd9" containerName="cilium-operator" May 8 23:52:53.101120 kubelet[2796]: I0508 23:52:53.100680 2796 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e102e7-24b6-45e7-9697-68f1d9c4fcd9" containerName="cilium-operator" May 8 23:52:53.101120 kubelet[2796]: I0508 23:52:53.100686 2796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d9cabd-185c-4cc0-8cdc-a7d6d335ae45" containerName="cilium-agent" May 8 23:52:53.114159 systemd-logind[1552]: Removed session 23. 
May 8 23:52:53.146410 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 42782 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:53.147233 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:53.151073 systemd-logind[1552]: New session 24 of user core. May 8 23:52:53.161085 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 23:52:53.216019 sshd[4598]: Connection closed by 10.0.0.1 port 42782 May 8 23:52:53.216583 sshd-session[4592]: pam_unix(sshd:session): session closed for user core May 8 23:52:53.225053 systemd[1]: Started sshd@24-10.0.0.44:22-10.0.0.1:42784.service - OpenSSH per-connection server daemon (10.0.0.1:42784). May 8 23:52:53.225441 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:42782.service: Deactivated successfully. May 8 23:52:53.227752 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit. May 8 23:52:53.228809 systemd[1]: session-24.scope: Deactivated successfully. May 8 23:52:53.230612 systemd-logind[1552]: Removed session 24. May 8 23:52:53.265314 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 42784 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:53.266573 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:53.270280 systemd-logind[1552]: New session 25 of user core. 
May 8 23:52:53.277294 kubelet[2796]: I0508 23:52:53.277255 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-hubble-tls\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277297 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-cilium-cgroup\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277320 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-cilium-ipsec-secrets\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277336 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-cni-path\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277350 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-etc-cni-netd\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277365 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-host-proc-sys-net\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.285977 kubelet[2796]: I0508 23:52:53.277382 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-xtables-lock\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277396 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-cilium-run\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277412 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-lib-modules\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277427 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-hostproc\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277441 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-host-proc-sys-kernel\") pod 
\"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277456 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-clustermesh-secrets\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286109 kubelet[2796]: I0508 23:52:53.277470 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-cilium-config-path\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286108 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 23:52:53.286275 kubelet[2796]: I0508 23:52:53.277486 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkm7\" (UniqueName: \"kubernetes.io/projected/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-kube-api-access-vnkm7\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.286275 kubelet[2796]: I0508 23:52:53.277504 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4511ec56-0a1b-4c7b-bcae-04d5fa025e2e-bpf-maps\") pod \"cilium-9g6bb\" (UID: \"4511ec56-0a1b-4c7b-bcae-04d5fa025e2e\") " pod="kube-system/cilium-9g6bb" May 8 23:52:53.432416 kubelet[2796]: E0508 23:52:53.432383 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:53.432996 containerd[1571]: 
time="2025-05-08T23:52:53.432965321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g6bb,Uid:4511ec56-0a1b-4c7b-bcae-04d5fa025e2e,Namespace:kube-system,Attempt:0,}" May 8 23:52:53.493044 kubelet[2796]: I0508 23:52:53.492206 2796 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T23:52:53Z","lastTransitionTime":"2025-05-08T23:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 23:52:53.539921 containerd[1571]: time="2025-05-08T23:52:53.536741154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:52:53.539921 containerd[1571]: time="2025-05-08T23:52:53.536834195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:52:53.539921 containerd[1571]: time="2025-05-08T23:52:53.536850996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:53.539921 containerd[1571]: time="2025-05-08T23:52:53.536951517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 23:52:53.584644 containerd[1571]: time="2025-05-08T23:52:53.584607962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g6bb,Uid:4511ec56-0a1b-4c7b-bcae-04d5fa025e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\""
May 8 23:52:53.585408 kubelet[2796]: E0508 23:52:53.585379 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:53.587400 containerd[1571]: time="2025-05-08T23:52:53.587309288Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 23:52:53.762708 containerd[1571]: time="2025-05-08T23:52:53.762534088Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1337b7eb35bc885ccfcd24abfe13faef36975e741914c29c942da968e0bc75f2\""
May 8 23:52:53.763179 containerd[1571]: time="2025-05-08T23:52:53.763131538Z" level=info msg="StartContainer for \"1337b7eb35bc885ccfcd24abfe13faef36975e741914c29c942da968e0bc75f2\""
May 8 23:52:53.847255 containerd[1571]: time="2025-05-08T23:52:53.844163826Z" level=info msg="StartContainer for \"1337b7eb35bc885ccfcd24abfe13faef36975e741914c29c942da968e0bc75f2\" returns successfully"
May 8 23:52:53.993806 containerd[1571]: time="2025-05-08T23:52:53.993610231Z" level=info msg="shim disconnected" id=1337b7eb35bc885ccfcd24abfe13faef36975e741914c29c942da968e0bc75f2 namespace=k8s.io
May 8 23:52:53.993806 containerd[1571]: time="2025-05-08T23:52:53.993668912Z" level=warning msg="cleaning up after shim disconnected" id=1337b7eb35bc885ccfcd24abfe13faef36975e741914c29c942da968e0bc75f2 namespace=k8s.io
May 8 23:52:53.993806 containerd[1571]: time="2025-05-08T23:52:53.993676432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:54.351515 kubelet[2796]: E0508 23:52:54.351459 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:54.354322 containerd[1571]: time="2025-05-08T23:52:54.354196222Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 23:52:54.392574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345973108.mount: Deactivated successfully.
May 8 23:52:54.406001 containerd[1571]: time="2025-05-08T23:52:54.405953704Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38\""
May 8 23:52:54.406651 containerd[1571]: time="2025-05-08T23:52:54.406620995Z" level=info msg="StartContainer for \"246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38\""
May 8 23:52:54.452814 containerd[1571]: time="2025-05-08T23:52:54.452681785Z" level=info msg="StartContainer for \"246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38\" returns successfully"
May 8 23:52:54.476108 containerd[1571]: time="2025-05-08T23:52:54.475899202Z" level=info msg="shim disconnected" id=246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38 namespace=k8s.io
May 8 23:52:54.476108 containerd[1571]: time="2025-05-08T23:52:54.475953323Z" level=warning msg="cleaning up after shim disconnected" id=246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38 namespace=k8s.io
May 8 23:52:54.476108 containerd[1571]: time="2025-05-08T23:52:54.475961323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:55.360287 kubelet[2796]: E0508 23:52:55.359603 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:55.373417 containerd[1571]: time="2025-05-08T23:52:55.365033051Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 23:52:55.383490 systemd[1]: run-containerd-runc-k8s.io-246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38-runc.MLteX0.mount: Deactivated successfully.
May 8 23:52:55.383645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-246247b35b67663626c782ad5272fa88ca2e1a991d8288b48e6f55bcfc1fbd38-rootfs.mount: Deactivated successfully.
May 8 23:52:55.388118 containerd[1571]: time="2025-05-08T23:52:55.387988171Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf\""
May 8 23:52:55.389885 containerd[1571]: time="2025-05-08T23:52:55.388427538Z" level=info msg="StartContainer for \"478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf\""
May 8 23:52:55.462735 containerd[1571]: time="2025-05-08T23:52:55.462682821Z" level=info msg="StartContainer for \"478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf\" returns successfully"
May 8 23:52:55.488396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf-rootfs.mount: Deactivated successfully.
May 8 23:52:55.497987 containerd[1571]: time="2025-05-08T23:52:55.497929134Z" level=info msg="shim disconnected" id=478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf namespace=k8s.io
May 8 23:52:55.497987 containerd[1571]: time="2025-05-08T23:52:55.497986135Z" level=warning msg="cleaning up after shim disconnected" id=478d625ebe88efbe9fddb6320b93007aae6bbf052f502bbc39d1e21853b36ddf namespace=k8s.io
May 8 23:52:55.498139 containerd[1571]: time="2025-05-08T23:52:55.497996655Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:56.364369 kubelet[2796]: E0508 23:52:56.364166 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:56.369056 containerd[1571]: time="2025-05-08T23:52:56.367829990Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 23:52:56.394503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089562580.mount: Deactivated successfully.
May 8 23:52:56.398617 containerd[1571]: time="2025-05-08T23:52:56.398564414Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58\""
May 8 23:52:56.399342 containerd[1571]: time="2025-05-08T23:52:56.399307105Z" level=info msg="StartContainer for \"a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58\""
May 8 23:52:56.456513 containerd[1571]: time="2025-05-08T23:52:56.455971600Z" level=info msg="StartContainer for \"a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58\" returns successfully"
May 8 23:52:56.471886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58-rootfs.mount: Deactivated successfully.
May 8 23:52:56.477182 containerd[1571]: time="2025-05-08T23:52:56.477125839Z" level=info msg="shim disconnected" id=a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58 namespace=k8s.io
May 8 23:52:56.477182 containerd[1571]: time="2025-05-08T23:52:56.477179520Z" level=warning msg="cleaning up after shim disconnected" id=a557d626dbca383b7e50d77e7ad2abe59175cad0a2084a65902e94d0bdfddd58 namespace=k8s.io
May 8 23:52:56.477182 containerd[1571]: time="2025-05-08T23:52:56.477187600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:52:57.199249 kubelet[2796]: E0508 23:52:57.199209 2796 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 23:52:57.369974 kubelet[2796]: E0508 23:52:57.369062 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:57.374083 containerd[1571]: time="2025-05-08T23:52:57.373896117Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 23:52:57.409567 containerd[1571]: time="2025-05-08T23:52:57.409417393Z" level=info msg="CreateContainer within sandbox \"2eb63eb0c81eaa25f9ebd1cf59658a1168cbd458f16b685a88cdee4c2c73c607\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"72239a13d66188f566e65a54f7a21c9e54b066eeb68fa25388eb2a5c1bc71b44\""
May 8 23:52:57.410435 containerd[1571]: time="2025-05-08T23:52:57.410387927Z" level=info msg="StartContainer for \"72239a13d66188f566e65a54f7a21c9e54b066eeb68fa25388eb2a5c1bc71b44\""
May 8 23:52:57.442748 systemd[1]: run-containerd-runc-k8s.io-72239a13d66188f566e65a54f7a21c9e54b066eeb68fa25388eb2a5c1bc71b44-runc.Xvdhgp.mount: Deactivated successfully.
May 8 23:52:57.487348 containerd[1571]: time="2025-05-08T23:52:57.487036320Z" level=info msg="StartContainer for \"72239a13d66188f566e65a54f7a21c9e54b066eeb68fa25388eb2a5c1bc71b44\" returns successfully"
May 8 23:52:58.043801 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 8 23:52:58.374877 kubelet[2796]: E0508 23:52:58.374760 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:52:58.399038 kubelet[2796]: I0508 23:52:58.398959 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9g6bb" podStartSLOduration=5.398943424 podStartE2EDuration="5.398943424s" podCreationTimestamp="2025-05-08 23:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:52:58.398598579 +0000 UTC m=+76.350447525" watchObservedRunningTime="2025-05-08 23:52:58.398943424 +0000 UTC m=+76.350792370"
May 8 23:52:59.443772 kubelet[2796]: E0508 23:52:59.439185 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:53:00.936932 systemd-networkd[1231]: lxc_health: Link UP
May 8 23:53:00.948409 systemd-networkd[1231]: lxc_health: Gained carrier
May 8 23:53:01.435522 kubelet[2796]: E0508 23:53:01.435491 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:53:01.846668 systemd[1]: run-containerd-runc-k8s.io-72239a13d66188f566e65a54f7a21c9e54b066eeb68fa25388eb2a5c1bc71b44-runc.27jTTt.mount: Deactivated successfully.
May 8 23:53:02.383994 kubelet[2796]: E0508 23:53:02.383748 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 23:53:02.782015 systemd-networkd[1231]: lxc_health: Gained IPv6LL
May 8 23:53:06.172228 sshd[4607]: Connection closed by 10.0.0.1 port 42784
May 8 23:53:06.172979 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
May 8 23:53:06.176739 systemd[1]: sshd@24-10.0.0.44:22-10.0.0.1:42784.service: Deactivated successfully.
May 8 23:53:06.179080 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit.
May 8 23:53:06.179197 systemd[1]: session-25.scope: Deactivated successfully.
May 8 23:53:06.180887 systemd-logind[1552]: Removed session 25.