May 16 23:47:30.891680 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 23:47:30.891700 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 16 22:17:08 -00 2025
May 16 23:47:30.891710 kernel: KASLR enabled
May 16 23:47:30.891715 kernel: efi: EFI v2.7 by EDK II
May 16 23:47:30.891721 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 16 23:47:30.891726 kernel: random: crng init done
May 16 23:47:30.891733 kernel: secureboot: Secure boot disabled
May 16 23:47:30.891739 kernel: ACPI: Early table checksum verification disabled
May 16 23:47:30.891745 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 16 23:47:30.891752 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 23:47:30.891758 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891764 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891770 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891776 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891783 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891802 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891808 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891814 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891821 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 23:47:30.891827 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 23:47:30.891833 kernel: NUMA: Failed to initialise from firmware
May 16 23:47:30.891839 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 23:47:30.891845 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 16 23:47:30.891851 kernel: Zone ranges:
May 16 23:47:30.891857 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 23:47:30.891865 kernel: DMA32 empty
May 16 23:47:30.891871 kernel: Normal empty
May 16 23:47:30.891877 kernel: Movable zone start for each node
May 16 23:47:30.891883 kernel: Early memory node ranges
May 16 23:47:30.891889 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 16 23:47:30.891895 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 16 23:47:30.891901 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 16 23:47:30.891907 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 23:47:30.891914 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 23:47:30.891920 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 23:47:30.891926 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 23:47:30.891932 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 23:47:30.891939 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 23:47:30.891945 kernel: psci: probing for conduit method from ACPI.
May 16 23:47:30.891952 kernel: psci: PSCIv1.1 detected in firmware.
May 16 23:47:30.892000 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 23:47:30.892007 kernel: psci: Trusted OS migration not required
May 16 23:47:30.892014 kernel: psci: SMC Calling Convention v1.1
May 16 23:47:30.892022 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 23:47:30.892029 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 16 23:47:30.892035 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 16 23:47:30.892042 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 23:47:30.892048 kernel: Detected PIPT I-cache on CPU0
May 16 23:47:30.892055 kernel: CPU features: detected: GIC system register CPU interface
May 16 23:47:30.892061 kernel: CPU features: detected: Hardware dirty bit management
May 16 23:47:30.892068 kernel: CPU features: detected: Spectre-v4
May 16 23:47:30.892074 kernel: CPU features: detected: Spectre-BHB
May 16 23:47:30.892081 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 23:47:30.892089 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 23:47:30.892095 kernel: CPU features: detected: ARM erratum 1418040
May 16 23:47:30.892102 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 23:47:30.892108 kernel: alternatives: applying boot alternatives
May 16 23:47:30.892116 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=927b8b75c68667a15a593c357c52795147d962dd3e649d9b89e3ea80e5637eb6
May 16 23:47:30.892123 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 23:47:30.892130 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 23:47:30.892136 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 23:47:30.892143 kernel: Fallback order for Node 0: 0
May 16 23:47:30.892150 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 23:47:30.892156 kernel: Policy zone: DMA
May 16 23:47:30.892164 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 23:47:30.892171 kernel: software IO TLB: area num 4.
May 16 23:47:30.892177 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 16 23:47:30.892185 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved)
May 16 23:47:30.892192 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 23:47:30.892198 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 23:47:30.892206 kernel: rcu: RCU event tracing is enabled.
May 16 23:47:30.892212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 23:47:30.892219 kernel: Trampoline variant of Tasks RCU enabled.
May 16 23:47:30.892226 kernel: Tracing variant of Tasks RCU enabled.
May 16 23:47:30.892233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 23:47:30.892239 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 23:47:30.892247 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 23:47:30.892254 kernel: GICv3: 256 SPIs implemented
May 16 23:47:30.892260 kernel: GICv3: 0 Extended SPIs implemented
May 16 23:47:30.892266 kernel: Root IRQ handler: gic_handle_irq
May 16 23:47:30.892273 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 23:47:30.892279 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 23:47:30.892286 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 23:47:30.892293 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 16 23:47:30.892299 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 16 23:47:30.892306 kernel: GICv3: using LPI property table @0x00000000400f0000
May 16 23:47:30.892312 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 16 23:47:30.892320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 23:47:30.892327 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 23:47:30.892334 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 23:47:30.892340 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 23:47:30.892347 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 23:47:30.892353 kernel: arm-pv: using stolen time PV
May 16 23:47:30.892360 kernel: Console: colour dummy device 80x25
May 16 23:47:30.892367 kernel: ACPI: Core revision 20230628
May 16 23:47:30.892374 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 23:47:30.892381 kernel: pid_max: default: 32768 minimum: 301
May 16 23:47:30.892389 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 23:47:30.892396 kernel: landlock: Up and running.
May 16 23:47:30.892402 kernel: SELinux: Initializing.
May 16 23:47:30.892409 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 23:47:30.892416 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 23:47:30.892423 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 23:47:30.892430 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 23:47:30.892437 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 23:47:30.892443 kernel: rcu: Hierarchical SRCU implementation.
May 16 23:47:30.892451 kernel: rcu: Max phase no-delay instances is 400.
May 16 23:47:30.892458 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 23:47:30.892465 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 23:47:30.892471 kernel: Remapping and enabling EFI services.
May 16 23:47:30.892478 kernel: smp: Bringing up secondary CPUs ...
May 16 23:47:30.892485 kernel: Detected PIPT I-cache on CPU1
May 16 23:47:30.892492 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 23:47:30.892499 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 16 23:47:30.892505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 23:47:30.892512 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 23:47:30.892520 kernel: Detected PIPT I-cache on CPU2
May 16 23:47:30.892527 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 23:47:30.892538 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 16 23:47:30.892553 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 23:47:30.892560 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 23:47:30.892567 kernel: Detected PIPT I-cache on CPU3
May 16 23:47:30.892574 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 23:47:30.892581 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 16 23:47:30.892588 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 23:47:30.892595 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 23:47:30.892604 kernel: smp: Brought up 1 node, 4 CPUs
May 16 23:47:30.892611 kernel: SMP: Total of 4 processors activated.
May 16 23:47:30.892618 kernel: CPU features: detected: 32-bit EL0 Support
May 16 23:47:30.892625 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 23:47:30.892632 kernel: CPU features: detected: Common not Private translations
May 16 23:47:30.892639 kernel: CPU features: detected: CRC32 instructions
May 16 23:47:30.892646 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 23:47:30.892655 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 23:47:30.892662 kernel: CPU features: detected: LSE atomic instructions
May 16 23:47:30.892669 kernel: CPU features: detected: Privileged Access Never
May 16 23:47:30.892676 kernel: CPU features: detected: RAS Extension Support
May 16 23:47:30.892683 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 23:47:30.892690 kernel: CPU: All CPU(s) started at EL1
May 16 23:47:30.892697 kernel: alternatives: applying system-wide alternatives
May 16 23:47:30.892704 kernel: devtmpfs: initialized
May 16 23:47:30.892711 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 23:47:30.892719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 23:47:30.892726 kernel: pinctrl core: initialized pinctrl subsystem
May 16 23:47:30.892733 kernel: SMBIOS 3.0.0 present.
May 16 23:47:30.892740 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 23:47:30.892747 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 23:47:30.892755 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 23:47:30.892762 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 23:47:30.892769 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 23:47:30.892776 kernel: audit: initializing netlink subsys (disabled)
May 16 23:47:30.892784 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 16 23:47:30.892798 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 23:47:30.892806 kernel: cpuidle: using governor menu
May 16 23:47:30.892813 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 23:47:30.892820 kernel: ASID allocator initialised with 32768 entries
May 16 23:47:30.892827 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 23:47:30.892834 kernel: Serial: AMBA PL011 UART driver
May 16 23:47:30.892841 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 23:47:30.892848 kernel: Modules: 0 pages in range for non-PLT usage
May 16 23:47:30.892857 kernel: Modules: 508944 pages in range for PLT usage
May 16 23:47:30.892864 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 23:47:30.892871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 23:47:30.892878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 23:47:30.892885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 23:47:30.892892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 23:47:30.892899 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 23:47:30.892907 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 23:47:30.892914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 23:47:30.892922 kernel: ACPI: Added _OSI(Module Device)
May 16 23:47:30.892929 kernel: ACPI: Added _OSI(Processor Device)
May 16 23:47:30.892936 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 23:47:30.892943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 23:47:30.892950 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 23:47:30.892957 kernel: ACPI: Interpreter enabled
May 16 23:47:30.892964 kernel: ACPI: Using GIC for interrupt routing
May 16 23:47:30.892971 kernel: ACPI: MCFG table detected, 1 entries
May 16 23:47:30.892978 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 23:47:30.892987 kernel: printk: console [ttyAMA0] enabled
May 16 23:47:30.892994 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 23:47:30.893129 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 23:47:30.893202 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 23:47:30.893268 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 23:47:30.893336 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 23:47:30.893408 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 23:47:30.893420 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 23:47:30.893428 kernel: PCI host bridge to bus 0000:00
May 16 23:47:30.893522 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 23:47:30.893589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 23:47:30.893678 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 23:47:30.893742 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 23:47:30.893866 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 23:47:30.893951 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 23:47:30.894019 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 23:47:30.894085 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 23:47:30.894149 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 23:47:30.894217 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 23:47:30.894285 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 23:47:30.894362 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 23:47:30.894425 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 23:47:30.894484 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 23:47:30.894542 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 23:47:30.894559 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 23:47:30.894567 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 23:47:30.894574 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 23:47:30.894581 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 23:47:30.894589 kernel: iommu: Default domain type: Translated
May 16 23:47:30.894599 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 23:47:30.894607 kernel: efivars: Registered efivars operations
May 16 23:47:30.894614 kernel: vgaarb: loaded
May 16 23:47:30.894622 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 23:47:30.894629 kernel: VFS: Disk quotas dquot_6.6.0
May 16 23:47:30.894637 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 23:47:30.894644 kernel: pnp: PnP ACPI init
May 16 23:47:30.894733 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 23:47:30.894746 kernel: pnp: PnP ACPI: found 1 devices
May 16 23:47:30.894754 kernel: NET: Registered PF_INET protocol family
May 16 23:47:30.894762 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 23:47:30.894770 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 23:47:30.894777 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 23:47:30.894785 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 23:47:30.894802 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 23:47:30.894810 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 23:47:30.894817 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 23:47:30.894826 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 23:47:30.894834 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 23:47:30.894841 kernel: PCI: CLS 0 bytes, default 64
May 16 23:47:30.894849 kernel: kvm [1]: HYP mode not available
May 16 23:47:30.894856 kernel: Initialise system trusted keyrings
May 16 23:47:30.894864 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 23:47:30.894871 kernel: Key type asymmetric registered
May 16 23:47:30.894879 kernel: Asymmetric key parser 'x509' registered
May 16 23:47:30.894886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 23:47:30.894895 kernel: io scheduler mq-deadline registered
May 16 23:47:30.894903 kernel: io scheduler kyber registered
May 16 23:47:30.894910 kernel: io scheduler bfq registered
May 16 23:47:30.894918 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 23:47:30.894925 kernel: ACPI: button: Power Button [PWRB]
May 16 23:47:30.894933 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 23:47:30.895003 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 23:47:30.895014 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 23:47:30.895021 kernel: thunder_xcv, ver 1.0
May 16 23:47:30.895030 kernel: thunder_bgx, ver 1.0
May 16 23:47:30.895037 kernel: nicpf, ver 1.0
May 16 23:47:30.895044 kernel: nicvf, ver 1.0
May 16 23:47:30.895118 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 23:47:30.895180 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T23:47:30 UTC (1747439250)
May 16 23:47:30.895190 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 23:47:30.895198 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 23:47:30.895205 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 16 23:47:30.895214 kernel: watchdog: Hard watchdog permanently disabled
May 16 23:47:30.895222 kernel: NET: Registered PF_INET6 protocol family
May 16 23:47:30.895229 kernel: Segment Routing with IPv6
May 16 23:47:30.895236 kernel: In-situ OAM (IOAM) with IPv6
May 16 23:47:30.895244 kernel: NET: Registered PF_PACKET protocol family
May 16 23:47:30.895254 kernel: Key type dns_resolver registered
May 16 23:47:30.895261 kernel: registered taskstats version 1
May 16 23:47:30.895269 kernel: Loading compiled-in X.509 certificates
May 16 23:47:30.895276 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: ce735021e0bf4130c76292f4d9f4a150f3612820'
May 16 23:47:30.895285 kernel: Key type .fscrypt registered
May 16 23:47:30.895292 kernel: Key type fscrypt-provisioning registered
May 16 23:47:30.895300 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 23:47:30.895307 kernel: ima: Allocated hash algorithm: sha1
May 16 23:47:30.895314 kernel: ima: No architecture policies found
May 16 23:47:30.895322 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 23:47:30.895329 kernel: clk: Disabling unused clocks
May 16 23:47:30.895336 kernel: Freeing unused kernel memory: 39744K
May 16 23:47:30.895344 kernel: Run /init as init process
May 16 23:47:30.895353 kernel: with arguments:
May 16 23:47:30.895360 kernel: /init
May 16 23:47:30.895367 kernel: with environment:
May 16 23:47:30.895374 kernel: HOME=/
May 16 23:47:30.895382 kernel: TERM=linux
May 16 23:47:30.895389 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 23:47:30.895398 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 23:47:30.895408 systemd[1]: Detected virtualization kvm.
May 16 23:47:30.895417 systemd[1]: Detected architecture arm64.
May 16 23:47:30.895425 systemd[1]: Running in initrd.
May 16 23:47:30.895433 systemd[1]: No hostname configured, using default hostname.
May 16 23:47:30.895440 systemd[1]: Hostname set to .
May 16 23:47:30.895448 systemd[1]: Initializing machine ID from VM UUID.
May 16 23:47:30.895456 systemd[1]: Queued start job for default target initrd.target.
May 16 23:47:30.895463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 23:47:30.895471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 23:47:30.895481 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 23:47:30.895489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 23:47:30.895497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 23:47:30.895506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 23:47:30.895517 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 23:47:30.895525 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 23:47:30.895534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 23:47:30.895542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 23:47:30.895556 systemd[1]: Reached target paths.target - Path Units.
May 16 23:47:30.895566 systemd[1]: Reached target slices.target - Slice Units.
May 16 23:47:30.895576 systemd[1]: Reached target swap.target - Swaps.
May 16 23:47:30.895584 systemd[1]: Reached target timers.target - Timer Units.
May 16 23:47:30.895592 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 23:47:30.895600 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 23:47:30.895608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 23:47:30.895618 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 16 23:47:30.895627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 23:47:30.895634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 23:47:30.895642 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 23:47:30.895650 systemd[1]: Reached target sockets.target - Socket Units.
May 16 23:47:30.895658 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 23:47:30.895666 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 23:47:30.895673 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 23:47:30.895681 systemd[1]: Starting systemd-fsck-usr.service...
May 16 23:47:30.895690 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 23:47:30.895698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 23:47:30.895706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 23:47:30.895713 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 23:47:30.895721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 23:47:30.895729 systemd[1]: Finished systemd-fsck-usr.service.
May 16 23:47:30.895739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 23:47:30.895747 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 23:47:30.895755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 23:47:30.895802 systemd-journald[240]: Collecting audit messages is disabled.
May 16 23:47:30.895822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 23:47:30.895830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 23:47:30.895838 systemd-journald[240]: Journal started
May 16 23:47:30.895857 systemd-journald[240]: Runtime Journal (/run/log/journal/b4ce03583fde429d987488eb3d2fda40) is 5.9M, max 47.3M, 41.4M free.
May 16 23:47:30.875661 systemd-modules-load[241]: Inserted module 'overlay'
May 16 23:47:30.898346 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 23:47:30.901980 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 23:47:30.902005 kernel: Bridge firewalling registered
May 16 23:47:30.902003 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 16 23:47:30.902370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 23:47:30.903748 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 23:47:30.919995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 23:47:30.921320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 23:47:30.923816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 23:47:30.925945 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 23:47:30.928887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 23:47:30.934679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 23:47:30.936981 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 23:47:30.942215 dracut-cmdline[273]: dracut-dracut-053
May 16 23:47:30.944873 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=927b8b75c68667a15a593c357c52795147d962dd3e649d9b89e3ea80e5637eb6
May 16 23:47:30.979959 systemd-resolved[283]: Positive Trust Anchors:
May 16 23:47:30.980184 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 23:47:30.980217 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 23:47:30.989675 systemd-resolved[283]: Defaulting to hostname 'linux'.
May 16 23:47:30.990947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 23:47:30.992652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 23:47:31.020832 kernel: SCSI subsystem initialized
May 16 23:47:31.024815 kernel: Loading iSCSI transport class v2.0-870.
May 16 23:47:31.032824 kernel: iscsi: registered transport (tcp)
May 16 23:47:31.046817 kernel: iscsi: registered transport (qla4xxx)
May 16 23:47:31.046839 kernel: QLogic iSCSI HBA Driver
May 16 23:47:31.095876 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 23:47:31.102950 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 23:47:31.118816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 23:47:31.118873 kernel: device-mapper: uevent: version 1.0.3
May 16 23:47:31.119813 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 16 23:47:31.166814 kernel: raid6: neonx8 gen() 15783 MB/s
May 16 23:47:31.183800 kernel: raid6: neonx4 gen() 15665 MB/s
May 16 23:47:31.200803 kernel: raid6: neonx2 gen() 13239 MB/s
May 16 23:47:31.217802 kernel: raid6: neonx1 gen() 10488 MB/s
May 16 23:47:31.234801 kernel: raid6: int64x8 gen() 6958 MB/s
May 16 23:47:31.251802 kernel: raid6: int64x4 gen() 7311 MB/s
May 16 23:47:31.268803 kernel: raid6: int64x2 gen() 6123 MB/s
May 16 23:47:31.285802 kernel: raid6: int64x1 gen() 5050 MB/s
May 16 23:47:31.285818 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
May 16 23:47:31.302810 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
May 16 23:47:31.302826 kernel: raid6: using neon recovery algorithm
May 16 23:47:31.307850 kernel: xor: measuring software checksum speed
May 16 23:47:31.307867 kernel: 8regs : 19721 MB/sec
May 16 23:47:31.308876 kernel: 32regs : 19631 MB/sec
May 16 23:47:31.308892 kernel: arm64_neon : 26831 MB/sec
May 16 23:47:31.308901 kernel: xor: using function: arm64_neon (26831 MB/sec)
May 16 23:47:31.360134 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 23:47:31.371099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 23:47:31.385941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 23:47:31.397257 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 16 23:47:31.400400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 23:47:31.402849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 23:47:31.417950 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 16 23:47:31.445840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 23:47:31.455913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 23:47:31.494305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 23:47:31.501127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 23:47:31.512048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 23:47:31.513188 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 23:47:31.514462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 23:47:31.515324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 23:47:31.526023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 23:47:31.531170 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 16 23:47:31.531340 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 23:47:31.535947 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 23:47:31.535980 kernel: GPT:9289727 != 19775487
May 16 23:47:31.535991 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 23:47:31.536837 kernel: GPT:9289727 != 19775487
May 16 23:47:31.537842 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 23:47:31.537869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 23:47:31.538718 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 23:47:31.542353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 23:47:31.542457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 23:47:31.549335 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 23:47:31.550160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 23:47:31.550284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 23:47:31.552194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 23:47:31.562693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 23:47:31.566194 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
May 16 23:47:31.566214 kernel: BTRFS: device fsid cbab1542-31e5-4eed-b266-dabd50022812 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (516)
May 16 23:47:31.575902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 23:47:31.576983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 23:47:31.585473 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 23:47:31.589612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 23:47:31.593141 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 23:47:31.594051 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 23:47:31.611940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 23:47:31.613955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 23:47:31.618325 disk-uuid[554]: Primary Header is updated.
May 16 23:47:31.618325 disk-uuid[554]: Secondary Entries is updated.
May 16 23:47:31.618325 disk-uuid[554]: Secondary Header is updated.
May 16 23:47:31.624035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 23:47:31.635614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 23:47:32.635111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 23:47:32.635185 disk-uuid[555]: The operation has completed successfully.
May 16 23:47:32.654977 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 23:47:32.655074 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 23:47:32.673943 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 23:47:32.676695 sh[574]: Success
May 16 23:47:32.688911 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 16 23:47:32.714414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 23:47:32.728046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 23:47:32.729496 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 23:47:32.738449 kernel: BTRFS info (device dm-0): first mount of filesystem cbab1542-31e5-4eed-b266-dabd50022812
May 16 23:47:32.738494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 16 23:47:32.738515 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 16 23:47:32.739874 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 16 23:47:32.739889 kernel: BTRFS info (device dm-0): using free space tree
May 16 23:47:32.743602 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 23:47:32.744676 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 23:47:32.753919 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 23:47:32.755220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 23:47:32.763296 kernel: BTRFS info (device vda6): first mount of filesystem f8f33e40-0bbe-4221-b2c5-a47f59cd3479
May 16 23:47:32.763337 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 23:47:32.763347 kernel: BTRFS info (device vda6): using free space tree
May 16 23:47:32.764803 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 23:47:32.771646 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 16 23:47:32.772976 kernel: BTRFS info (device vda6): last unmount of filesystem f8f33e40-0bbe-4221-b2c5-a47f59cd3479
May 16 23:47:32.777218 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 23:47:32.782950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 23:47:32.847825 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 23:47:32.857942 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 23:47:32.878883 ignition[663]: Ignition 2.20.0
May 16 23:47:32.878893 ignition[663]: Stage: fetch-offline
May 16 23:47:32.878943 systemd-networkd[768]: lo: Link UP
May 16 23:47:32.878925 ignition[663]: no configs at "/usr/lib/ignition/base.d"
May 16 23:47:32.878947 systemd-networkd[768]: lo: Gained carrier
May 16 23:47:32.878934 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:32.879651 systemd-networkd[768]: Enumeration completed
May 16 23:47:32.879080 ignition[663]: parsed url from cmdline: ""
May 16 23:47:32.879755 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 23:47:32.879083 ignition[663]: no config URL provided
May 16 23:47:32.880048 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 23:47:32.879088 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
May 16 23:47:32.880051 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 23:47:32.879094 ignition[663]: no config at "/usr/lib/ignition/user.ign"
May 16 23:47:32.880778 systemd-networkd[768]: eth0: Link UP
May 16 23:47:32.879118 ignition[663]: op(1): [started] loading QEMU firmware config module
May 16 23:47:32.880781 systemd-networkd[768]: eth0: Gained carrier
May 16 23:47:32.879123 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 23:47:32.880804 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 23:47:32.889640 ignition[663]: op(1): [finished] loading QEMU firmware config module
May 16 23:47:32.882959 systemd[1]: Reached target network.target - Network.
May 16 23:47:32.899826 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 23:47:32.931588 ignition[663]: parsing config with SHA512: 69f624595df16b1334f67046df5c62b3a4435d0056146e665b659e1f29c1681214c803f2728e719cb659f87ede068173df38e17eac8315c05cf4b151f5dac958
May 16 23:47:32.937501 unknown[663]: fetched base config from "system"
May 16 23:47:32.937514 unknown[663]: fetched user config from "qemu"
May 16 23:47:32.938112 ignition[663]: fetch-offline: fetch-offline passed
May 16 23:47:32.938206 ignition[663]: Ignition finished successfully
May 16 23:47:32.940383 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 23:47:32.941512 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 23:47:32.947962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 23:47:32.957741 ignition[774]: Ignition 2.20.0
May 16 23:47:32.957762 ignition[774]: Stage: kargs
May 16 23:47:32.957937 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 16 23:47:32.957947 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:32.958805 ignition[774]: kargs: kargs passed
May 16 23:47:32.958847 ignition[774]: Ignition finished successfully
May 16 23:47:32.960814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 23:47:32.962864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 23:47:32.976011 ignition[783]: Ignition 2.20.0
May 16 23:47:32.976020 ignition[783]: Stage: disks
May 16 23:47:32.976168 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 16 23:47:32.976177 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:32.978173 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 23:47:32.977010 ignition[783]: disks: disks passed
May 16 23:47:32.977050 ignition[783]: Ignition finished successfully
May 16 23:47:32.980932 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 23:47:32.981773 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 23:47:32.983404 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 23:47:32.984889 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 23:47:32.986206 systemd[1]: Reached target basic.target - Basic System.
May 16 23:47:32.999055 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 23:47:33.008853 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 16 23:47:33.012559 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 23:47:33.014387 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 23:47:33.063598 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 23:47:33.064777 kernel: EXT4-fs (vda9): mounted filesystem c7e94867-2074-4400-b561-602a5d7fe7b3 r/w with ordered data mode. Quota mode: none.
May 16 23:47:33.064676 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 23:47:33.075863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 23:47:33.077334 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 23:47:33.078551 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 23:47:33.078590 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 23:47:33.083606 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
May 16 23:47:33.078611 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 23:47:33.084862 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 23:47:33.088728 kernel: BTRFS info (device vda6): first mount of filesystem f8f33e40-0bbe-4221-b2c5-a47f59cd3479
May 16 23:47:33.088746 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 23:47:33.088757 kernel: BTRFS info (device vda6): using free space tree
May 16 23:47:33.088767 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 23:47:33.089459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 23:47:33.101921 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 23:47:33.135328 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 16 23:47:33.138427 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 16 23:47:33.141609 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 16 23:47:33.145794 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 23:47:33.222033 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 23:47:33.231162 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 23:47:33.232558 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 23:47:33.236811 kernel: BTRFS info (device vda6): last unmount of filesystem f8f33e40-0bbe-4221-b2c5-a47f59cd3479
May 16 23:47:33.252654 ignition[917]: INFO : Ignition 2.20.0
May 16 23:47:33.252654 ignition[917]: INFO : Stage: mount
May 16 23:47:33.254005 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 23:47:33.254005 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:33.254005 ignition[917]: INFO : mount: mount passed
May 16 23:47:33.254005 ignition[917]: INFO : Ignition finished successfully
May 16 23:47:33.254063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 23:47:33.255773 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 23:47:33.263884 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 23:47:33.738063 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 23:47:33.748958 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 23:47:33.753814 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (932)
May 16 23:47:33.755464 kernel: BTRFS info (device vda6): first mount of filesystem f8f33e40-0bbe-4221-b2c5-a47f59cd3479
May 16 23:47:33.755479 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 23:47:33.755489 kernel: BTRFS info (device vda6): using free space tree
May 16 23:47:33.757808 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 23:47:33.758926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 23:47:33.773603 ignition[949]: INFO : Ignition 2.20.0
May 16 23:47:33.773603 ignition[949]: INFO : Stage: files
May 16 23:47:33.774815 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 23:47:33.774815 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:33.774815 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
May 16 23:47:33.777521 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 23:47:33.777521 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 23:47:33.777521 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 23:47:33.777521 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 23:47:33.781460 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 23:47:33.781460 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 16 23:47:33.781460 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 16 23:47:33.777842 unknown[949]: wrote ssh authorized keys file for user: core
May 16 23:47:33.857193 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 23:47:34.047625 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 16 23:47:34.047625 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 23:47:34.050492 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 16 23:47:34.377397 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 23:47:34.429958 systemd-networkd[768]: eth0: Gained IPv6LL
May 16 23:47:34.447423 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 23:47:34.448952 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 16 23:47:34.844280 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 23:47:35.132472 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 23:47:35.132472 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 23:47:35.135261 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 23:47:35.155399 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 23:47:35.159175 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 23:47:35.161339 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 23:47:35.161339 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 23:47:35.161339 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 23:47:35.161339 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 23:47:35.161339 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 23:47:35.161339 ignition[949]: INFO : files: files passed
May 16 23:47:35.161339 ignition[949]: INFO : Ignition finished successfully
May 16 23:47:35.162120 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 23:47:35.174947 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 23:47:35.177144 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 23:47:35.178263 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 23:47:35.178365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 23:47:35.184454 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 23:47:35.187562 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 23:47:35.187562 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 23:47:35.189951 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 23:47:35.189569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 23:47:35.191261 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 23:47:35.197259 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 23:47:35.215878 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 23:47:35.216732 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 23:47:35.220056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 23:47:35.220911 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 23:47:35.221702 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 23:47:35.222519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 23:47:35.237591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 23:47:35.244965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 23:47:35.252421 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 23:47:35.253460 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 23:47:35.255034 systemd[1]: Stopped target timers.target - Timer Units.
May 16 23:47:35.256388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 23:47:35.256495 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 23:47:35.258524 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 23:47:35.260211 systemd[1]: Stopped target basic.target - Basic System.
May 16 23:47:35.261436 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 23:47:35.262667 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 23:47:35.264131 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 23:47:35.265683 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 23:47:35.267040 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 23:47:35.268468 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 23:47:35.269894 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 23:47:35.271286 systemd[1]: Stopped target swap.target - Swaps.
May 16 23:47:35.272470 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 23:47:35.272596 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 23:47:35.274331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 23:47:35.275707 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 23:47:35.277137 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 23:47:35.277879 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 23:47:35.279355 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 23:47:35.279465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 23:47:35.281565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 23:47:35.281676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 23:47:35.283196 systemd[1]: Stopped target paths.target - Path Units.
May 16 23:47:35.284550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 23:47:35.285863 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 23:47:35.286876 systemd[1]: Stopped target slices.target - Slice Units.
May 16 23:47:35.288283 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 23:47:35.289900 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 23:47:35.290006 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 23:47:35.291110 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 23:47:35.291192 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 23:47:35.292459 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 23:47:35.292570 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 23:47:35.293869 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 23:47:35.293966 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 23:47:35.304964 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 23:47:35.306346 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 23:47:35.307062 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 23:47:35.307175 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 23:47:35.308689 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 23:47:35.308855 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 23:47:35.314514 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 23:47:35.314621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 23:47:35.318397 ignition[1006]: INFO : Ignition 2.20.0
May 16 23:47:35.318397 ignition[1006]: INFO : Stage: umount
May 16 23:47:35.318397 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 23:47:35.318397 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 23:47:35.318397 ignition[1006]: INFO : umount: umount passed
May 16 23:47:35.318397 ignition[1006]: INFO : Ignition finished successfully
May 16 23:47:35.318733 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 23:47:35.318850 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 23:47:35.321102 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 23:47:35.321507 systemd[1]: Stopped target network.target - Network.
May 16 23:47:35.322401 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 23:47:35.322459 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 23:47:35.323872 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 23:47:35.323913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 23:47:35.325129 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 23:47:35.325167 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 23:47:35.326454 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 23:47:35.326494 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 23:47:35.329250 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 23:47:35.330910 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 23:47:35.332881 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 23:47:35.332968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 23:47:35.334991 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 23:47:35.335129 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 23:47:35.339659 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 23:47:35.339804 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 23:47:35.340839 systemd-networkd[768]: eth0: DHCPv6 lease lost
May 16 23:47:35.343017 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 23:47:35.343121 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 23:47:35.345147 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 23:47:35.345204 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 23:47:35.352906 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 23:47:35.353615 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 23:47:35.353674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 23:47:35.355231 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 23:47:35.355271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 23:47:35.356920 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 23:47:35.356961 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 23:47:35.358627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 23:47:35.358669 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 23:47:35.360514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 23:47:35.373670 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 23:47:35.373819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 23:47:35.379453 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 23:47:35.379632 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 23:47:35.381815 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 23:47:35.381857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 23:47:35.382767 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 23:47:35.382819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 23:47:35.384862 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 23:47:35.384910 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 23:47:35.387473 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 23:47:35.387515 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 23:47:35.389871 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 23:47:35.389909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 23:47:35.400933 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 23:47:35.401696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 23:47:35.401751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 23:47:35.403825 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 23:47:35.403871 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 23:47:35.405677 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 23:47:35.405712 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 23:47:35.407744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 23:47:35.407783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 23:47:35.409913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 23:47:35.410858 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 23:47:35.412190 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 23:47:35.414118 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 23:47:35.424239 systemd[1]: Switching root.
May 16 23:47:35.450832 systemd-journald[240]: Journal stopped
May 16 23:47:36.161660 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
May 16 23:47:36.161716 kernel: SELinux: policy capability network_peer_controls=1
May 16 23:47:36.161728 kernel: SELinux: policy capability open_perms=1
May 16 23:47:36.161741 kernel: SELinux: policy capability extended_socket_class=1
May 16 23:47:36.161750 kernel: SELinux: policy capability always_check_network=0
May 16 23:47:36.161760 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 23:47:36.161769 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 23:47:36.161778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 23:47:36.161829 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 23:47:36.161846 systemd[1]: Successfully loaded SELinux policy in 34.090ms.
May 16 23:47:36.161862 kernel: audit: type=1403 audit(1747439255.617:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 23:47:36.161876 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.543ms.
May 16 23:47:36.161888 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 23:47:36.161899 systemd[1]: Detected virtualization kvm.
May 16 23:47:36.161912 systemd[1]: Detected architecture arm64.
May 16 23:47:36.161923 systemd[1]: Detected first boot.
May 16 23:47:36.161934 systemd[1]: Initializing machine ID from VM UUID.
May 16 23:47:36.161945 zram_generator::config[1051]: No configuration found.
May 16 23:47:36.161957 systemd[1]: Populated /etc with preset unit settings.
May 16 23:47:36.161967 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 23:47:36.161979 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 23:47:36.161991 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 23:47:36.162003 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 23:47:36.162013 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 23:47:36.162024 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 23:47:36.162034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 23:47:36.162045 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 23:47:36.162055 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 23:47:36.162067 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 23:47:36.162077 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 23:47:36.162088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 23:47:36.162099 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 23:47:36.162110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 23:47:36.162120 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 23:47:36.162131 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 23:47:36.162144 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 23:47:36.162154 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 16 23:47:36.162166 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 23:47:36.162176 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 23:47:36.162187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 23:47:36.162197 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 23:47:36.162208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 23:47:36.162218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 23:47:36.162232 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 23:47:36.162242 systemd[1]: Reached target slices.target - Slice Units.
May 16 23:47:36.162255 systemd[1]: Reached target swap.target - Swaps.
May 16 23:47:36.162266 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 23:47:36.162277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 23:47:36.162288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 23:47:36.162298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 23:47:36.162309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 23:47:36.162319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 23:47:36.162330 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 23:47:36.162340 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 23:47:36.162352 systemd[1]: Mounting media.mount - External Media Directory...
May 16 23:47:36.162372 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 23:47:36.162384 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 23:47:36.162394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 23:47:36.162406 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 23:47:36.162416 systemd[1]: Reached target machines.target - Containers.
May 16 23:47:36.162426 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 23:47:36.162437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 23:47:36.162448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 23:47:36.162459 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 23:47:36.162469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 23:47:36.162479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 23:47:36.162490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 23:47:36.162500 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 23:47:36.162511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 23:47:36.162522 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 23:47:36.162539 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 23:47:36.162553 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 23:47:36.162564 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 23:47:36.162574 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 23:47:36.162585 kernel: fuse: init (API version 7.39)
May 16 23:47:36.162596 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 23:47:36.162606 kernel: loop: module loaded
May 16 23:47:36.162617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 23:47:36.162627 kernel: ACPI: bus type drm_connector registered
May 16 23:47:36.162637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 23:47:36.162650 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 23:47:36.162660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 23:47:36.162671 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 23:47:36.162682 systemd[1]: Stopped verity-setup.service.
May 16 23:47:36.162692 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 23:47:36.162702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 23:47:36.162717 systemd[1]: Mounted media.mount - External Media Directory.
May 16 23:47:36.162748 systemd-journald[1118]: Collecting audit messages is disabled.
May 16 23:47:36.162771 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 23:47:36.162784 systemd-journald[1118]: Journal started
May 16 23:47:36.163154 systemd-journald[1118]: Runtime Journal (/run/log/journal/b4ce03583fde429d987488eb3d2fda40) is 5.9M, max 47.3M, 41.4M free.
May 16 23:47:36.163209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 23:47:35.984710 systemd[1]: Queued start job for default target multi-user.target.
May 16 23:47:35.999385 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 23:47:35.999751 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 23:47:36.166822 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 23:47:36.167085 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 23:47:36.168759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 23:47:36.170187 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 23:47:36.171368 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 23:47:36.171508 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 23:47:36.172670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 23:47:36.172849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 23:47:36.173951 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 23:47:36.174078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 23:47:36.175162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 23:47:36.175295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 23:47:36.176720 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 23:47:36.176877 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 23:47:36.177934 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 23:47:36.178066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 23:47:36.179267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 23:47:36.180469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 23:47:36.181747 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 23:47:36.194384 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 23:47:36.200897 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 23:47:36.202776 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 23:47:36.203623 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 23:47:36.203666 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 23:47:36.205395 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 16 23:47:36.207401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 23:47:36.209271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 23:47:36.210247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 23:47:36.211607 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 23:47:36.215692 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 23:47:36.216776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 23:47:36.219971 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 23:47:36.222624 systemd-journald[1118]: Time spent on flushing to /var/log/journal/b4ce03583fde429d987488eb3d2fda40 is 17.047ms for 858 entries.
May 16 23:47:36.222624 systemd-journald[1118]: System Journal (/var/log/journal/b4ce03583fde429d987488eb3d2fda40) is 8.0M, max 195.6M, 187.6M free.
May 16 23:47:36.256652 systemd-journald[1118]: Received client request to flush runtime journal.
May 16 23:47:36.256702 kernel: loop0: detected capacity change from 0 to 113536
May 16 23:47:36.223101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 23:47:36.227113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 23:47:36.232006 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 23:47:36.236998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 23:47:36.239839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 23:47:36.241326 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 23:47:36.242314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 23:47:36.244032 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 23:47:36.247225 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 23:47:36.251399 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 23:47:36.264542 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 16 23:47:36.268827 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 23:47:36.269092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 23:47:36.273344 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 23:47:36.274700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 23:47:36.291194 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 16 23:47:36.294936 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
May 16 23:47:36.294954 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
May 16 23:47:36.297654 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 23:47:36.298611 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 16 23:47:36.300334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 23:47:36.310958 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 23:47:36.315818 kernel: loop1: detected capacity change from 0 to 116808
May 16 23:47:36.336925 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 23:47:36.344817 kernel: loop2: detected capacity change from 0 to 207008
May 16 23:47:36.346955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 23:47:36.365138 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 16 23:47:36.365155 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 16 23:47:36.369998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 23:47:36.383833 kernel: loop3: detected capacity change from 0 to 113536
May 16 23:47:36.389821 kernel: loop4: detected capacity change from 0 to 116808
May 16 23:47:36.394808 kernel: loop5: detected capacity change from 0 to 207008
May 16 23:47:36.398854 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 23:47:36.399244 (sd-merge)[1190]: Merged extensions into '/usr'.
May 16 23:47:36.402708 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 23:47:36.402724 systemd[1]: Reloading...
May 16 23:47:36.445827 zram_generator::config[1213]: No configuration found.
May 16 23:47:36.494177 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 23:47:36.557131 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 23:47:36.593348 systemd[1]: Reloading finished in 190 ms.
May 16 23:47:36.640946 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 23:47:36.644902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 23:47:36.659978 systemd[1]: Starting ensure-sysext.service...
May 16 23:47:36.661889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 23:47:36.670219 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
May 16 23:47:36.670236 systemd[1]: Reloading...
May 16 23:47:36.679779 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 23:47:36.680051 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 23:47:36.680679 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 23:47:36.680915 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 16 23:47:36.680964 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 16 23:47:36.685200 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 16 23:47:36.685213 systemd-tmpfiles[1251]: Skipping /boot
May 16 23:47:36.692331 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 16 23:47:36.692351 systemd-tmpfiles[1251]: Skipping /boot
May 16 23:47:36.721808 zram_generator::config[1281]: No configuration found.
May 16 23:47:36.802592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 23:47:36.837501 systemd[1]: Reloading finished in 166 ms.
May 16 23:47:36.853713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 23:47:36.854954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 23:47:36.869847 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 23:47:36.871943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 23:47:36.873764 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 23:47:36.879019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 23:47:36.882075 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 23:47:36.887421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 23:47:36.903191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 23:47:36.917415 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 23:47:36.920230 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 23:47:36.926455 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 23:47:36.928060 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
May 16 23:47:36.928112 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 23:47:36.931618 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 23:47:36.934739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 23:47:36.941165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 23:47:36.944142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 23:47:36.947100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 23:47:36.950594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 23:47:36.950716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 23:47:36.951262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 23:47:36.952928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 23:47:36.953070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 23:47:36.956187 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 23:47:36.956316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 23:47:36.960123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 23:47:36.960255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 23:47:36.961265 augenrules[1358]: No rules
May 16 23:47:36.965099 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 23:47:36.965260 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 23:47:36.967961 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 23:47:36.971491 systemd[1]: Finished ensure-sysext.service.
May 16 23:47:36.977002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 23:47:36.982234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 23:47:36.986976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 23:47:36.990010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 23:47:36.991126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 23:47:36.994092 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 23:47:36.997072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 23:47:36.999648 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 23:47:37.000522 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 23:47:37.001049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 23:47:37.001194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 23:47:37.002427 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 23:47:37.002579 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 23:47:37.003692 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 23:47:37.003837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 23:47:37.007637 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 16 23:47:37.015736 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 23:47:37.034832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1351)
May 16 23:47:37.054680 systemd-resolved[1317]: Positive Trust Anchors:
May 16 23:47:37.054761 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 23:47:37.054820 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 23:47:37.067830 systemd-resolved[1317]: Defaulting to hostname 'linux'.
May 16 23:47:37.069369 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 23:47:37.074727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 23:47:37.078083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 23:47:37.087010 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 23:47:37.088173 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 23:47:37.089637 systemd[1]: Reached target time-set.target - System Time Set.
May 16 23:47:37.091872 systemd-networkd[1386]: lo: Link UP
May 16 23:47:37.091881 systemd-networkd[1386]: lo: Gained carrier
May 16 23:47:37.092683 systemd-networkd[1386]: Enumeration completed
May 16 23:47:37.092886 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 23:47:37.093847 systemd[1]: Reached target network.target - Network.
May 16 23:47:37.097064 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 23:47:37.097074 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 23:47:37.098133 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 23:47:37.099903 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 23:47:37.099940 systemd-networkd[1386]: eth0: Link UP
May 16 23:47:37.099942 systemd-networkd[1386]: eth0: Gained carrier
May 16 23:47:37.099951 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 23:47:37.106871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 23:47:37.111866 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 23:47:37.113870 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
May 16 23:47:37.114901 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 23:47:37.114953 systemd-timesyncd[1387]: Initial clock synchronization to Fri 2025-05-16 23:47:37.243807 UTC.
May 16 23:47:37.137082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 23:47:37.145861 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 23:47:37.148429 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 23:47:37.175748 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 23:47:37.185857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 23:47:37.209328 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 23:47:37.210503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 23:47:37.211437 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 23:47:37.212339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 23:47:37.213255 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 23:47:37.214301 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 23:47:37.215189 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 23:47:37.216096 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 23:47:37.216966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 23:47:37.216998 systemd[1]: Reached target paths.target - Path Units.
May 16 23:47:37.217649 systemd[1]: Reached target timers.target - Timer Units.
May 16 23:47:37.219289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 23:47:37.221347 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 23:47:37.231695 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 23:47:37.233608 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 23:47:37.234975 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 23:47:37.235888 systemd[1]: Reached target sockets.target - Socket Units.
May 16 23:47:37.236571 systemd[1]: Reached target basic.target - Basic System.
May 16 23:47:37.237290 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 23:47:37.237318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 23:47:37.241177 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 23:47:37.241851 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 23:47:37.243569 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 23:47:37.247978 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 23:47:37.250287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 23:47:37.251299 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 23:47:37.252672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 23:47:37.257391 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 23:47:37.259981 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 23:47:37.263891 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 23:47:37.265249 jq[1419]: false
May 16 23:47:37.267870 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 23:47:37.269293 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 23:47:37.269679 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 23:47:37.270949 systemd[1]: Starting update-engine.service - Update Engine...
May 16 23:47:37.274240 extend-filesystems[1420]: Found loop3
May 16 23:47:37.275059 extend-filesystems[1420]: Found loop4
May 16 23:47:37.275059 extend-filesystems[1420]: Found loop5
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda1
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda2
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda3
May 16 23:47:37.275059 extend-filesystems[1420]: Found usr
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda4
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda6
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda7
May 16 23:47:37.275059 extend-filesystems[1420]: Found vda9
May 16 23:47:37.275059 extend-filesystems[1420]: Checking size of /dev/vda9
May 16 23:47:37.307307 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 23:47:37.307351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1352)
May 16 23:47:37.274871 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 23:47:37.308899 extend-filesystems[1420]: Resized partition /dev/vda9
May 16 23:47:37.284642 dbus-daemon[1418]: [system] SELinux support is enabled
May 16 23:47:37.276677 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 23:47:37.310013 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024)
May 16 23:47:37.278667 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 23:47:37.278958 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 23:47:37.280147 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 23:47:37.316847 jq[1430]: true
May 16 23:47:37.280879 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 23:47:37.286667 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 23:47:37.291557 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 23:47:37.291602 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 23:47:37.293933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 23:47:37.293961 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 23:47:37.308840 systemd[1]: motdgen.service: Deactivated successfully.
May 16 23:47:37.309104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 23:47:37.316135 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 23:47:37.323380 jq[1450]: true
May 16 23:47:37.330677 update_engine[1428]: I20250516 23:47:37.330538 1428 main.cc:92] Flatcar Update Engine starting
May 16 23:47:37.333391 tar[1434]: linux-arm64/LICENSE
May 16 23:47:37.333611 tar[1434]: linux-arm64/helm
May 16 23:47:37.336894 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 23:47:37.338583 systemd[1]: Started update-engine.service - Update Engine.
May 16 23:47:37.338968 update_engine[1428]: I20250516 23:47:37.338585 1428 update_check_scheduler.cc:74] Next update check in 3m26s
May 16 23:47:37.347125 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 23:47:37.347125 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 23:47:37.347125 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 23:47:37.354818 extend-filesystems[1420]: Resized filesystem in /dev/vda9
May 16 23:47:37.348448 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 23:47:37.355872 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 23:47:37.357837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 23:47:37.372730 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button)
May 16 23:47:37.373354 systemd-logind[1426]: New seat seat0.
May 16 23:47:37.374510 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 23:47:37.401005 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 23:47:37.420141 bash[1474]: Updated "/home/core/.ssh/authorized_keys"
May 16 23:47:37.421669 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 23:47:37.424543 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 23:47:37.538553 containerd[1448]: time="2025-05-16T23:47:37.538456440Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 16 23:47:37.569881 containerd[1448]: time="2025-05-16T23:47:37.569718240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571211200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571251240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571266920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571410960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571426560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571476000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 16 23:47:37.571567 containerd[1448]: time="2025-05-16T23:47:37.571488520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.571719 containerd[1448]: time="2025-05-16T23:47:37.571650320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 23:47:37.571719 containerd[1448]: time="2025-05-16T23:47:37.571665760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.571719 containerd[1448]: time="2025-05-16T23:47:37.571678760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 16 23:47:37.571719 containerd[1448]: time="2025-05-16T23:47:37.571687520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.571809 containerd[1448]: time="2025-05-16T23:47:37.571756640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.572008 containerd[1448]: time="2025-05-16T23:47:37.571967360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 23:47:37.572104 containerd[1448]: time="2025-05-16T23:47:37.572078360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 23:47:37.572104 containerd[1448]: time="2025-05-16T23:47:37.572096600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 23:47:37.572209 containerd[1448]: time="2025-05-16T23:47:37.572179640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 23:47:37.572241 containerd[1448]: time="2025-05-16T23:47:37.572227960Z" level=info msg="metadata content store policy set" policy=shared
May 16 23:47:37.591083 containerd[1448]: time="2025-05-16T23:47:37.591018640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 23:47:37.591083 containerd[1448]: time="2025-05-16T23:47:37.591071960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 23:47:37.591083 containerd[1448]: time="2025-05-16T23:47:37.591089320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 16 23:47:37.591290 containerd[1448]: time="2025-05-16T23:47:37.591104480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 16 23:47:37.591290 containerd[1448]: time="2025-05-16T23:47:37.591118120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 16 23:47:37.591290 containerd[1448]: time="2025-05-16T23:47:37.591265280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591608320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591750160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591767120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591782440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591814760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591827840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591841440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591854360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591868520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591887560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591899440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591910160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591929160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593266 containerd[1448]: time="2025-05-16T23:47:37.591942720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.591954400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.591966440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.591979440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.591991480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592002200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592013840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592027200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592040600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592051720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592062640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592075560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592089400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592108920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592121480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593589 containerd[1448]: time="2025-05-16T23:47:37.592132360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592316080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592334040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592343880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592356480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592365120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592379160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592389240Z" level=info msg="NRI interface is disabled by configuration."
May 16 23:47:37.593875 containerd[1448]: time="2025-05-16T23:47:37.592399000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 16 23:47:37.594032 containerd[1448]: time="2025-05-16T23:47:37.592686960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 16 23:47:37.594032 containerd[1448]: time="2025-05-16T23:47:37.592735520Z" level=info msg="Connect containerd service"
May 16 23:47:37.594032 containerd[1448]: time="2025-05-16T23:47:37.592763680Z" level=info msg="using legacy CRI server"
May 16 23:47:37.594032 containerd[1448]: time="2025-05-16T23:47:37.592770320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 23:47:37.594032 containerd[1448]: time="2025-05-16T23:47:37.593020240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 16 23:47:37.596651 containerd[1448]: time="2025-05-16T23:47:37.596621040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 23:47:37.597025 containerd[1448]: time="2025-05-16T23:47:37.596994040Z" level=info msg="Start subscribing containerd event"
May 16 23:47:37.597117 containerd[1448]: time="2025-05-16T23:47:37.597102480Z" level=info msg="Start recovering state"
May 16 23:47:37.597258 containerd[1448]: time="2025-05-16T23:47:37.597236560Z" level=info msg="Start event monitor"
May 16 23:47:37.597404 containerd[1448]: time="2025-05-16T23:47:37.597389760Z" level=info msg="Start snapshots syncer"
May 16 23:47:37.597504 containerd[1448]: time="2025-05-16T23:47:37.597490920Z" level=info msg="Start cni network conf syncer for default"
May 16 23:47:37.597714 containerd[1448]: time="2025-05-16T23:47:37.597700120Z" level=info msg="Start streaming server"
May 16 23:47:37.598257 containerd[1448]: time="2025-05-16T23:47:37.598240280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 23:47:37.598371 containerd[1448]: time="2025-05-16T23:47:37.598357960Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 23:47:37.598703 systemd[1]: Started containerd.service - containerd container runtime.
May 16 23:47:37.600247 containerd[1448]: time="2025-05-16T23:47:37.600225880Z" level=info msg="containerd successfully booted in 0.062988s"
May 16 23:47:37.739211 tar[1434]: linux-arm64/README.md
May 16 23:47:37.752200 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 23:47:37.823951 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 23:47:37.841962 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 23:47:37.858099 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 23:47:37.863371 systemd[1]: issuegen.service: Deactivated successfully.
May 16 23:47:37.864826 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 23:47:37.867049 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 23:47:37.878416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 23:47:37.882045 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 23:47:37.883918 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 16 23:47:37.884957 systemd[1]: Reached target getty.target - Login Prompts.
May 16 23:47:39.102957 systemd-networkd[1386]: eth0: Gained IPv6LL
May 16 23:47:39.109552 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 23:47:39.113454 systemd[1]: Reached target network-online.target - Network is Online.
May 16 23:47:39.125059 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 23:47:39.127357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 23:47:39.129249 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 23:47:39.144430 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 23:47:39.145867 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 23:47:39.147524 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 23:47:39.152698 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 23:47:39.705374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 23:47:39.707042 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 23:47:39.709927 systemd[1]: Startup finished in 553ms (kernel) + 4.916s (initrd) + 4.128s (userspace) = 9.599s.
May 16 23:47:39.712168 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 23:47:40.205464 kubelet[1530]: E0516 23:47:40.205334 1530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 23:47:40.207126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 23:47:40.207302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 23:47:43.615348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 23:47:43.616432 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:36736.service - OpenSSH per-connection server daemon (10.0.0.1:36736).
May 16 23:47:43.680695 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 36736 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:47:43.682222 sshd-session[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:47:43.696029 systemd-logind[1426]: New session 1 of user core.
May 16 23:47:43.697045 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 23:47:43.709137 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 23:47:43.718381 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 23:47:43.720547 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 23:47:43.728504 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 23:47:43.809576 systemd[1549]: Queued start job for default target default.target.
May 16 23:47:43.819080 systemd[1549]: Created slice app.slice - User Application Slice.
May 16 23:47:43.819113 systemd[1549]: Reached target paths.target - Paths.
May 16 23:47:43.819125 systemd[1549]: Reached target timers.target - Timers.
May 16 23:47:43.820347 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 23:47:43.829859 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 23:47:43.829917 systemd[1549]: Reached target sockets.target - Sockets.
May 16 23:47:43.829929 systemd[1549]: Reached target basic.target - Basic System.
May 16 23:47:43.829962 systemd[1549]: Reached target default.target - Main User Target.
May 16 23:47:43.829989 systemd[1549]: Startup finished in 96ms.
May 16 23:47:43.830210 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 23:47:43.836946 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 23:47:43.895236 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:36740.service - OpenSSH per-connection server daemon (10.0.0.1:36740).
May 16 23:47:43.946276 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:47:43.947664 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:47:43.951701 systemd-logind[1426]: New session 2 of user core.
May 16 23:47:43.960979 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 23:47:44.013738 sshd[1562]: Connection closed by 10.0.0.1 port 36740
May 16 23:47:44.013247 sshd-session[1560]: pam_unix(sshd:session): session closed for user core
May 16 23:47:44.021150 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:36740.service: Deactivated successfully.
May 16 23:47:44.023136 systemd[1]: session-2.scope: Deactivated successfully.
May 16 23:47:44.024428 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit.
May 16 23:47:44.032087 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:36754.service - OpenSSH per-connection server daemon (10.0.0.1:36754).
May 16 23:47:44.033203 systemd-logind[1426]: Removed session 2.
May 16 23:47:44.067887 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 36754 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:47:44.069088 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:47:44.073288 systemd-logind[1426]: New session 3 of user core.
May 16 23:47:44.088984 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 23:47:44.136617 sshd[1569]: Connection closed by 10.0.0.1 port 36754
May 16 23:47:44.137094 sshd-session[1567]: pam_unix(sshd:session): session closed for user core
May 16 23:47:44.147397 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:36754.service: Deactivated successfully.
May 16 23:47:44.149253 systemd[1]: session-3.scope: Deactivated successfully.
May 16 23:47:44.151961 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit.
May 16 23:47:44.153104 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:36762.service - OpenSSH per-connection server daemon (10.0.0.1:36762).
May 16 23:47:44.156057 systemd-logind[1426]: Removed session 3.
May 16 23:47:44.193441 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:47:44.194637 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:47:44.198852 systemd-logind[1426]: New session 4 of user core.
May 16 23:47:44.209974 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 23:47:44.265396 sshd[1576]: Connection closed by 10.0.0.1 port 36762
May 16 23:47:44.265877 sshd-session[1574]: pam_unix(sshd:session): session closed for user core
May 16 23:47:44.276202 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:36762.service: Deactivated successfully.
May 16 23:47:44.277509 systemd[1]: session-4.scope: Deactivated successfully. May 16 23:47:44.279951 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. May 16 23:47:44.281094 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:36766.service - OpenSSH per-connection server daemon (10.0.0.1:36766). May 16 23:47:44.281773 systemd-logind[1426]: Removed session 4. May 16 23:47:44.320347 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 36766 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk May 16 23:47:44.321557 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 23:47:44.325804 systemd-logind[1426]: New session 5 of user core. May 16 23:47:44.337962 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 23:47:44.401443 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 23:47:44.401727 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 23:47:44.417780 sudo[1584]: pam_unix(sudo:session): session closed for user root May 16 23:47:44.419844 sshd[1583]: Connection closed by 10.0.0.1 port 36766 May 16 23:47:44.420449 sshd-session[1581]: pam_unix(sshd:session): session closed for user core May 16 23:47:44.427565 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:36766.service: Deactivated successfully. May 16 23:47:44.431077 systemd[1]: session-5.scope: Deactivated successfully. May 16 23:47:44.432326 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. May 16 23:47:44.433702 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774). May 16 23:47:44.436659 systemd-logind[1426]: Removed session 5. 
May 16 23:47:44.474796 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk May 16 23:47:44.476182 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 23:47:44.479838 systemd-logind[1426]: New session 6 of user core. May 16 23:47:44.490979 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 23:47:44.543139 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 23:47:44.543726 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 23:47:44.547042 sudo[1593]: pam_unix(sudo:session): session closed for user root May 16 23:47:44.552154 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 23:47:44.552427 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 23:47:44.572125 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 23:47:44.595543 augenrules[1615]: No rules May 16 23:47:44.597010 systemd[1]: audit-rules.service: Deactivated successfully. May 16 23:47:44.597194 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 23:47:44.598450 sudo[1592]: pam_unix(sudo:session): session closed for user root May 16 23:47:44.599659 sshd[1591]: Connection closed by 10.0.0.1 port 36774 May 16 23:47:44.600090 sshd-session[1589]: pam_unix(sshd:session): session closed for user core May 16 23:47:44.609207 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:36774.service: Deactivated successfully. May 16 23:47:44.610657 systemd[1]: session-6.scope: Deactivated successfully. May 16 23:47:44.612050 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. 
May 16 23:47:44.613222 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:36780.service - OpenSSH per-connection server daemon (10.0.0.1:36780). May 16 23:47:44.613995 systemd-logind[1426]: Removed session 6. May 16 23:47:44.653292 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 36780 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk May 16 23:47:44.655345 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 23:47:44.659614 systemd-logind[1426]: New session 7 of user core. May 16 23:47:44.673975 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 23:47:44.726163 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 23:47:44.726462 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 23:47:45.059227 (dockerd)[1647]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 23:47:45.059291 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 23:47:45.327256 dockerd[1647]: time="2025-05-16T23:47:45.327126799Z" level=info msg="Starting up" May 16 23:47:45.484213 dockerd[1647]: time="2025-05-16T23:47:45.484169086Z" level=info msg="Loading containers: start." May 16 23:47:45.618828 kernel: Initializing XFRM netlink socket May 16 23:47:45.690301 systemd-networkd[1386]: docker0: Link UP May 16 23:47:45.730933 dockerd[1647]: time="2025-05-16T23:47:45.730874985Z" level=info msg="Loading containers: done." May 16 23:47:45.745086 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2649304746-merged.mount: Deactivated successfully. 
May 16 23:47:45.749015 dockerd[1647]: time="2025-05-16T23:47:45.748977767Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 23:47:45.749091 dockerd[1647]: time="2025-05-16T23:47:45.749060872Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 16 23:47:45.749174 dockerd[1647]: time="2025-05-16T23:47:45.749156078Z" level=info msg="Daemon has completed initialization" May 16 23:47:45.775707 dockerd[1647]: time="2025-05-16T23:47:45.775594095Z" level=info msg="API listen on /run/docker.sock" May 16 23:47:45.775829 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 23:47:46.343392 containerd[1448]: time="2025-05-16T23:47:46.343335920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 23:47:47.021109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553253536.mount: Deactivated successfully. 
May 16 23:47:47.905240 containerd[1448]: time="2025-05-16T23:47:47.905185066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:47.906334 containerd[1448]: time="2025-05-16T23:47:47.906294608Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313" May 16 23:47:47.907198 containerd[1448]: time="2025-05-16T23:47:47.907170058Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:47.910629 containerd[1448]: time="2025-05-16T23:47:47.910598854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:47.912461 containerd[1448]: time="2025-05-16T23:47:47.912286993Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.568906101s" May 16 23:47:47.912461 containerd[1448]: time="2025-05-16T23:47:47.912321106Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 16 23:47:47.912999 containerd[1448]: time="2025-05-16T23:47:47.912969940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 23:47:48.921830 containerd[1448]: time="2025-05-16T23:47:48.921771062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:48.922319 containerd[1448]: time="2025-05-16T23:47:48.922269199Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549" May 16 23:47:48.923093 containerd[1448]: time="2025-05-16T23:47:48.923069850Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:48.926814 containerd[1448]: time="2025-05-16T23:47:48.926770432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:48.927879 containerd[1448]: time="2025-05-16T23:47:48.927847406Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.014766009s" May 16 23:47:48.927879 containerd[1448]: time="2025-05-16T23:47:48.927877855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 16 23:47:48.928502 containerd[1448]: time="2025-05-16T23:47:48.928320559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 23:47:49.920147 containerd[1448]: time="2025-05-16T23:47:49.920079197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:49.920585 containerd[1448]: time="2025-05-16T23:47:49.920532346Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192" May 16 23:47:49.921679 containerd[1448]: time="2025-05-16T23:47:49.921646144Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:49.925247 containerd[1448]: time="2025-05-16T23:47:49.925200779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:49.926326 containerd[1448]: time="2025-05-16T23:47:49.926289116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 997.933894ms" May 16 23:47:49.926326 containerd[1448]: time="2025-05-16T23:47:49.926322287Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 16 23:47:49.926734 containerd[1448]: time="2025-05-16T23:47:49.926698212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 23:47:50.457559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 23:47:50.471751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 23:47:50.573415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 23:47:50.577215 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 23:47:50.692899 kubelet[1914]: E0516 23:47:50.692862 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 23:47:50.695910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 23:47:50.696063 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 23:47:50.938146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144719705.mount: Deactivated successfully. May 16 23:47:51.288756 containerd[1448]: time="2025-05-16T23:47:51.288250393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:51.288756 containerd[1448]: time="2025-05-16T23:47:51.288681811Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377" May 16 23:47:51.289895 containerd[1448]: time="2025-05-16T23:47:51.289858577Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:51.291951 containerd[1448]: time="2025-05-16T23:47:51.291898218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:51.292781 containerd[1448]: time="2025-05-16T23:47:51.292749855Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id 
\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.366016228s" May 16 23:47:51.292781 containerd[1448]: time="2025-05-16T23:47:51.292782328Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 16 23:47:51.293620 containerd[1448]: time="2025-05-16T23:47:51.293592661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 23:47:51.785108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331634000.mount: Deactivated successfully. May 16 23:47:52.371662 containerd[1448]: time="2025-05-16T23:47:52.371615126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.372593 containerd[1448]: time="2025-05-16T23:47:52.372509838Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 16 23:47:52.373380 containerd[1448]: time="2025-05-16T23:47:52.373348890Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.376321 containerd[1448]: time="2025-05-16T23:47:52.376260449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.377891 containerd[1448]: time="2025-05-16T23:47:52.377862946Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.084232155s" May 16 23:47:52.378082 containerd[1448]: time="2025-05-16T23:47:52.377975874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 23:47:52.378563 containerd[1448]: time="2025-05-16T23:47:52.378488703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 23:47:52.797364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252345960.mount: Deactivated successfully. May 16 23:47:52.802137 containerd[1448]: time="2025-05-16T23:47:52.802086710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.802627 containerd[1448]: time="2025-05-16T23:47:52.802569682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 16 23:47:52.803385 containerd[1448]: time="2025-05-16T23:47:52.803353675Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.805493 containerd[1448]: time="2025-05-16T23:47:52.805454715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:52.806476 containerd[1448]: time="2025-05-16T23:47:52.806434384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 427.752934ms" May 16 23:47:52.806476 containerd[1448]: time="2025-05-16T23:47:52.806470582Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 23:47:52.807293 containerd[1448]: time="2025-05-16T23:47:52.807244422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 23:47:53.396032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541459516.mount: Deactivated successfully. May 16 23:47:54.759308 containerd[1448]: time="2025-05-16T23:47:54.759068977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:54.760237 containerd[1448]: time="2025-05-16T23:47:54.759970036Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 16 23:47:54.760969 containerd[1448]: time="2025-05-16T23:47:54.760852683Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:54.765366 containerd[1448]: time="2025-05-16T23:47:54.765300936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:47:54.766863 containerd[1448]: time="2025-05-16T23:47:54.766824136Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.959538664s" May 16 
23:47:54.766898 containerd[1448]: time="2025-05-16T23:47:54.766861724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 16 23:47:59.651957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 23:47:59.666048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 23:47:59.687676 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-7.scope)... May 16 23:47:59.687692 systemd[1]: Reloading... May 16 23:47:59.758831 zram_generator::config[2113]: No configuration found. May 16 23:47:59.876788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 23:47:59.929374 systemd[1]: Reloading finished in 241 ms. May 16 23:47:59.977762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 23:47:59.979173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 23:47:59.981996 systemd[1]: kubelet.service: Deactivated successfully. May 16 23:47:59.982184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 23:47:59.983752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 23:48:00.091468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 23:48:00.095483 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 23:48:00.131103 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 23:48:00.131103 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 23:48:00.131103 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 23:48:00.131446 kubelet[2157]: I0516 23:48:00.131152 2157 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 23:48:02.075296 kubelet[2157]: I0516 23:48:02.075245 2157 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 23:48:02.075296 kubelet[2157]: I0516 23:48:02.075279 2157 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 23:48:02.075654 kubelet[2157]: I0516 23:48:02.075550 2157 server.go:954] "Client rotation is on, will bootstrap in background" May 16 23:48:02.105976 kubelet[2157]: E0516 23:48:02.105932 2157 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:02.109089 kubelet[2157]: I0516 23:48:02.108927 2157 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 23:48:02.119326 kubelet[2157]: E0516 23:48:02.119289 2157 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 23:48:02.119326 kubelet[2157]: I0516 23:48:02.119326 2157 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 23:48:02.122389 kubelet[2157]: I0516 23:48:02.122369 2157 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 23:48:02.122605 kubelet[2157]: I0516 23:48:02.122582 2157 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 23:48:02.122776 kubelet[2157]: I0516 23:48:02.122606 2157 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemor
yManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 23:48:02.122877 kubelet[2157]: I0516 23:48:02.122859 2157 topology_manager.go:138] "Creating topology manager with none policy" May 16 23:48:02.122877 kubelet[2157]: I0516 23:48:02.122869 2157 container_manager_linux.go:304] "Creating device plugin manager" May 16 23:48:02.123084 kubelet[2157]: I0516 23:48:02.123057 2157 state_mem.go:36] "Initialized new in-memory state store" May 16 23:48:02.125383 kubelet[2157]: I0516 23:48:02.125355 2157 kubelet.go:446] "Attempting to sync node with API server" May 16 23:48:02.125383 kubelet[2157]: I0516 23:48:02.125380 2157 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 23:48:02.125447 kubelet[2157]: I0516 23:48:02.125400 2157 kubelet.go:352] "Adding apiserver pod source" May 16 23:48:02.125447 kubelet[2157]: I0516 23:48:02.125410 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 23:48:02.127896 kubelet[2157]: W0516 23:48:02.127850 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:02.127960 kubelet[2157]: E0516 23:48:02.127912 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:02.128509 kubelet[2157]: I0516 23:48:02.128443 2157 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 
23:48:02.128930 kubelet[2157]: W0516 23:48:02.128901 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:02.128968 kubelet[2157]: E0516 23:48:02.128942 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:02.129283 kubelet[2157]: I0516 23:48:02.129272 2157 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 23:48:02.129417 kubelet[2157]: W0516 23:48:02.129405 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 23:48:02.131236 kubelet[2157]: I0516 23:48:02.130450 2157 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 23:48:02.131236 kubelet[2157]: I0516 23:48:02.130498 2157 server.go:1287] "Started kubelet" May 16 23:48:02.131236 kubelet[2157]: I0516 23:48:02.130557 2157 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 23:48:02.131450 kubelet[2157]: I0516 23:48:02.131398 2157 server.go:479] "Adding debug handlers to kubelet server" May 16 23:48:02.132330 kubelet[2157]: I0516 23:48:02.132274 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 23:48:02.133300 kubelet[2157]: I0516 23:48:02.133268 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 23:48:02.133520 kubelet[2157]: I0516 23:48:02.133502 2157 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 23:48:02.134663 kubelet[2157]: I0516 23:48:02.134461 2157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 23:48:02.135802 kubelet[2157]: E0516 23:48:02.135753 2157 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 23:48:02.135802 kubelet[2157]: I0516 23:48:02.135801 2157 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 23:48:02.136022 kubelet[2157]: I0516 23:48:02.135996 2157 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 23:48:02.136065 kubelet[2157]: I0516 23:48:02.136052 2157 reconciler.go:26] "Reconciler: start to sync state" May 16 23:48:02.136092 kubelet[2157]: E0516 23:48:02.135846 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.184026c4cdf33b9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 23:48:02.130467738 +0000 UTC m=+2.032077910,LastTimestamp:2025-05-16 23:48:02.130467738 +0000 UTC m=+2.032077910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 23:48:02.136445 kubelet[2157]: W0516 23:48:02.136400 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:02.136494 kubelet[2157]: E0516 23:48:02.136448 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:02.136864 kubelet[2157]: I0516 23:48:02.136803 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 23:48:02.137172 kubelet[2157]: E0516 23:48:02.137147 2157 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 23:48:02.137353 kubelet[2157]: E0516 23:48:02.137323 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" May 16 23:48:02.138163 kubelet[2157]: I0516 23:48:02.138138 2157 factory.go:221] Registration of the containerd container factory successfully May 16 23:48:02.138163 kubelet[2157]: I0516 23:48:02.138160 2157 factory.go:221] Registration of the systemd container factory successfully May 16 23:48:02.150081 kubelet[2157]: I0516 23:48:02.149915 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 23:48:02.151324 kubelet[2157]: I0516 23:48:02.151290 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 23:48:02.151324 kubelet[2157]: I0516 23:48:02.151313 2157 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 23:48:02.151451 kubelet[2157]: I0516 23:48:02.151332 2157 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 23:48:02.151451 kubelet[2157]: I0516 23:48:02.151338 2157 kubelet.go:2382] "Starting kubelet main sync loop" May 16 23:48:02.151451 kubelet[2157]: E0516 23:48:02.151373 2157 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 23:48:02.153298 kubelet[2157]: I0516 23:48:02.153257 2157 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 23:48:02.153298 kubelet[2157]: I0516 23:48:02.153276 2157 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 23:48:02.153382 kubelet[2157]: I0516 23:48:02.153306 2157 state_mem.go:36] "Initialized new in-memory state store" May 16 23:48:02.153636 kubelet[2157]: W0516 23:48:02.153549 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:02.153636 kubelet[2157]: E0516 23:48:02.153613 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:02.236390 kubelet[2157]: E0516 23:48:02.236354 2157 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 23:48:02.251545 kubelet[2157]: E0516 23:48:02.251509 2157 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 23:48:02.298205 kubelet[2157]: I0516 23:48:02.298163 2157 policy_none.go:49] "None policy: Start" May 16 23:48:02.298205 kubelet[2157]: I0516 23:48:02.298195 2157 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 
23:48:02.298205 kubelet[2157]: I0516 23:48:02.298209 2157 state_mem.go:35] "Initializing new in-memory state store" May 16 23:48:02.302965 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 23:48:02.321401 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 23:48:02.324384 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 23:48:02.332613 kubelet[2157]: I0516 23:48:02.332520 2157 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 23:48:02.332741 kubelet[2157]: I0516 23:48:02.332716 2157 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 23:48:02.332991 kubelet[2157]: I0516 23:48:02.332737 2157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 23:48:02.333221 kubelet[2157]: I0516 23:48:02.333186 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 23:48:02.333970 kubelet[2157]: E0516 23:48:02.333896 2157 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 23:48:02.334043 kubelet[2157]: E0516 23:48:02.333983 2157 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 23:48:02.338082 kubelet[2157]: E0516 23:48:02.338046 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" May 16 23:48:02.434428 kubelet[2157]: I0516 23:48:02.434395 2157 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 23:48:02.434897 kubelet[2157]: E0516 23:48:02.434855 2157 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" May 16 23:48:02.459607 systemd[1]: Created slice kubepods-burstable-pod90f445e3250a8e54529818b48469f760.slice - libcontainer container kubepods-burstable-pod90f445e3250a8e54529818b48469f760.slice. May 16 23:48:02.480146 kubelet[2157]: E0516 23:48:02.480120 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 23:48:02.482044 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 23:48:02.483861 kubelet[2157]: E0516 23:48:02.483838 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 23:48:02.485645 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 16 23:48:02.487004 kubelet[2157]: E0516 23:48:02.486984 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 23:48:02.636926 kubelet[2157]: I0516 23:48:02.636784 2157 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 23:48:02.637257 kubelet[2157]: E0516 23:48:02.637213 2157 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" May 16 23:48:02.637351 kubelet[2157]: I0516 23:48:02.637254 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 23:48:02.637351 kubelet[2157]: I0516 23:48:02.637289 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 23:48:02.637351 kubelet[2157]: I0516 23:48:02.637309 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost" May 16 23:48:02.637351 kubelet[2157]: I0516 23:48:02.637323 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 23:48:02.637351 kubelet[2157]: I0516 23:48:02.637345 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 23:48:02.637658 kubelet[2157]: I0516 23:48:02.637362 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 23:48:02.637658 kubelet[2157]: I0516 23:48:02.637505 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 23:48:02.637658 kubelet[2157]: I0516 23:48:02.637528 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost" May 16 23:48:02.637658 kubelet[2157]: I0516 23:48:02.637542 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost" May 16 23:48:02.738523 kubelet[2157]: E0516 23:48:02.738475 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" May 16 23:48:02.780969 kubelet[2157]: E0516 23:48:02.780943 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:02.781710 containerd[1448]: time="2025-05-16T23:48:02.781624325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90f445e3250a8e54529818b48469f760,Namespace:kube-system,Attempt:0,}" May 16 23:48:02.784596 kubelet[2157]: E0516 23:48:02.784348 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:02.784775 containerd[1448]: time="2025-05-16T23:48:02.784706155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 23:48:02.788128 kubelet[2157]: E0516 23:48:02.788042 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:02.788572 containerd[1448]: time="2025-05-16T23:48:02.788364051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" 
May 16 23:48:03.038940 kubelet[2157]: I0516 23:48:03.038836 2157 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 23:48:03.039597 kubelet[2157]: E0516 23:48:03.039504 2157 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" May 16 23:48:03.197485 kubelet[2157]: W0516 23:48:03.197423 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:03.197849 kubelet[2157]: E0516 23:48:03.197489 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:03.208102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559167236.mount: Deactivated successfully. 
May 16 23:48:03.211174 containerd[1448]: time="2025-05-16T23:48:03.211116499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 23:48:03.213255 containerd[1448]: time="2025-05-16T23:48:03.213213140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 16 23:48:03.214679 containerd[1448]: time="2025-05-16T23:48:03.214372920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 23:48:03.215379 containerd[1448]: time="2025-05-16T23:48:03.215347963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 23:48:03.216184 containerd[1448]: time="2025-05-16T23:48:03.216120482Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 23:48:03.218137 containerd[1448]: time="2025-05-16T23:48:03.218084191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 23:48:03.219015 containerd[1448]: time="2025-05-16T23:48:03.218890844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 23:48:03.221479 containerd[1448]: time="2025-05-16T23:48:03.221417215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 23:48:03.224390 
containerd[1448]: time="2025-05-16T23:48:03.223817383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 442.086159ms" May 16 23:48:03.225457 containerd[1448]: time="2025-05-16T23:48:03.225422797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 440.614909ms" May 16 23:48:03.225618 containerd[1448]: time="2025-05-16T23:48:03.225592910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 437.169919ms" May 16 23:48:03.300062 kubelet[2157]: W0516 23:48:03.299950 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:03.300062 kubelet[2157]: E0516 23:48:03.299997 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:03.368025 containerd[1448]: time="2025-05-16T23:48:03.367824844Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:03.368025 containerd[1448]: time="2025-05-16T23:48:03.367912545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:03.368025 containerd[1448]: time="2025-05-16T23:48:03.367924404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.368540 containerd[1448]: time="2025-05-16T23:48:03.368444277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:03.368540 containerd[1448]: time="2025-05-16T23:48:03.368198043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.369174 containerd[1448]: time="2025-05-16T23:48:03.369101371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:03.369174 containerd[1448]: time="2025-05-16T23:48:03.369137669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.369249 containerd[1448]: time="2025-05-16T23:48:03.369214953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.372158 containerd[1448]: time="2025-05-16T23:48:03.371988560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:03.372158 containerd[1448]: time="2025-05-16T23:48:03.372100139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:03.372158 containerd[1448]: time="2025-05-16T23:48:03.372131710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.372537 containerd[1448]: time="2025-05-16T23:48:03.372382792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:03.393972 systemd[1]: Started cri-containerd-18063ee4097eb2641430b37e83b23a30be9c4e46770d4f8caba423e7fc5cd20d.scope - libcontainer container 18063ee4097eb2641430b37e83b23a30be9c4e46770d4f8caba423e7fc5cd20d. May 16 23:48:03.395295 systemd[1]: Started cri-containerd-3817eabfc77d360371c39447ac618a8e5ba3523f102489ae27e1be77664d5155.scope - libcontainer container 3817eabfc77d360371c39447ac618a8e5ba3523f102489ae27e1be77664d5155. May 16 23:48:03.396840 systemd[1]: Started cri-containerd-3977e9c66d6e60e0a36beadd07141b7f7da1b80ec85b840e506e3b47b3f670bc.scope - libcontainer container 3977e9c66d6e60e0a36beadd07141b7f7da1b80ec85b840e506e3b47b3f670bc. 
May 16 23:48:03.426859 containerd[1448]: time="2025-05-16T23:48:03.426817753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18063ee4097eb2641430b37e83b23a30be9c4e46770d4f8caba423e7fc5cd20d\"" May 16 23:48:03.428851 kubelet[2157]: E0516 23:48:03.428720 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:03.429788 containerd[1448]: time="2025-05-16T23:48:03.429750696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90f445e3250a8e54529818b48469f760,Namespace:kube-system,Attempt:0,} returns sandbox id \"3817eabfc77d360371c39447ac618a8e5ba3523f102489ae27e1be77664d5155\"" May 16 23:48:03.431278 kubelet[2157]: E0516 23:48:03.430838 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:03.432013 containerd[1448]: time="2025-05-16T23:48:03.431977546Z" level=info msg="CreateContainer within sandbox \"18063ee4097eb2641430b37e83b23a30be9c4e46770d4f8caba423e7fc5cd20d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 23:48:03.432745 containerd[1448]: time="2025-05-16T23:48:03.432682797Z" level=info msg="CreateContainer within sandbox \"3817eabfc77d360371c39447ac618a8e5ba3523f102489ae27e1be77664d5155\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 23:48:03.439743 containerd[1448]: time="2025-05-16T23:48:03.439703895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3977e9c66d6e60e0a36beadd07141b7f7da1b80ec85b840e506e3b47b3f670bc\"" May 16 23:48:03.441038 
kubelet[2157]: E0516 23:48:03.440984 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:03.442719 containerd[1448]: time="2025-05-16T23:48:03.442677222Z" level=info msg="CreateContainer within sandbox \"3977e9c66d6e60e0a36beadd07141b7f7da1b80ec85b840e506e3b47b3f670bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 23:48:03.446940 containerd[1448]: time="2025-05-16T23:48:03.446854440Z" level=info msg="CreateContainer within sandbox \"18063ee4097eb2641430b37e83b23a30be9c4e46770d4f8caba423e7fc5cd20d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9f2915ac503af254d03a51b9bfe8c826bdb73586943758452f869daf90fa8f8\"" May 16 23:48:03.447422 containerd[1448]: time="2025-05-16T23:48:03.447398152Z" level=info msg="StartContainer for \"c9f2915ac503af254d03a51b9bfe8c826bdb73586943758452f869daf90fa8f8\"" May 16 23:48:03.454982 containerd[1448]: time="2025-05-16T23:48:03.454932632Z" level=info msg="CreateContainer within sandbox \"3817eabfc77d360371c39447ac618a8e5ba3523f102489ae27e1be77664d5155\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee81f7430ea88bd3967a797d14c3f287beaa98f0d72f1568fd0ebb5d4ed5e8c5\"" May 16 23:48:03.456106 containerd[1448]: time="2025-05-16T23:48:03.456081915Z" level=info msg="StartContainer for \"ee81f7430ea88bd3967a797d14c3f287beaa98f0d72f1568fd0ebb5d4ed5e8c5\"" May 16 23:48:03.457196 containerd[1448]: time="2025-05-16T23:48:03.457137327Z" level=info msg="CreateContainer within sandbox \"3977e9c66d6e60e0a36beadd07141b7f7da1b80ec85b840e506e3b47b3f670bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00f85271aeda33554126dace6634a08a9bc1b8aa22a42ae62a426cbc034c0e1b\"" May 16 23:48:03.457574 containerd[1448]: time="2025-05-16T23:48:03.457551512Z" level=info msg="StartContainer for 
\"00f85271aeda33554126dace6634a08a9bc1b8aa22a42ae62a426cbc034c0e1b\"" May 16 23:48:03.473986 systemd[1]: Started cri-containerd-c9f2915ac503af254d03a51b9bfe8c826bdb73586943758452f869daf90fa8f8.scope - libcontainer container c9f2915ac503af254d03a51b9bfe8c826bdb73586943758452f869daf90fa8f8. May 16 23:48:03.477442 systemd[1]: Started cri-containerd-ee81f7430ea88bd3967a797d14c3f287beaa98f0d72f1568fd0ebb5d4ed5e8c5.scope - libcontainer container ee81f7430ea88bd3967a797d14c3f287beaa98f0d72f1568fd0ebb5d4ed5e8c5. May 16 23:48:03.479277 kubelet[2157]: W0516 23:48:03.479138 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:03.479277 kubelet[2157]: E0516 23:48:03.479206 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:03.481493 systemd[1]: Started cri-containerd-00f85271aeda33554126dace6634a08a9bc1b8aa22a42ae62a426cbc034c0e1b.scope - libcontainer container 00f85271aeda33554126dace6634a08a9bc1b8aa22a42ae62a426cbc034c0e1b. 
May 16 23:48:03.524547 kubelet[2157]: W0516 23:48:03.524470 2157 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused May 16 23:48:03.524547 kubelet[2157]: E0516 23:48:03.524539 2157 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" May 16 23:48:03.541280 kubelet[2157]: E0516 23:48:03.541160 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" May 16 23:48:03.552352 containerd[1448]: time="2025-05-16T23:48:03.552140615Z" level=info msg="StartContainer for \"c9f2915ac503af254d03a51b9bfe8c826bdb73586943758452f869daf90fa8f8\" returns successfully" May 16 23:48:03.552352 containerd[1448]: time="2025-05-16T23:48:03.552314094Z" level=info msg="StartContainer for \"ee81f7430ea88bd3967a797d14c3f287beaa98f0d72f1568fd0ebb5d4ed5e8c5\" returns successfully" May 16 23:48:03.552352 containerd[1448]: time="2025-05-16T23:48:03.552340776Z" level=info msg="StartContainer for \"00f85271aeda33554126dace6634a08a9bc1b8aa22a42ae62a426cbc034c0e1b\" returns successfully" May 16 23:48:03.842666 kubelet[2157]: I0516 23:48:03.842555 2157 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 23:48:04.157962 kubelet[2157]: E0516 23:48:04.157667 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 23:48:04.157962 
kubelet[2157]: E0516 23:48:04.157778 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:04.164443 kubelet[2157]: E0516 23:48:04.164221 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 23:48:04.164443 kubelet[2157]: E0516 23:48:04.164344 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:04.168118 kubelet[2157]: E0516 23:48:04.167973 2157 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 23:48:04.168118 kubelet[2157]: E0516 23:48:04.168075 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:05.130008 kubelet[2157]: I0516 23:48:05.129831 2157 apiserver.go:52] "Watching apiserver"
May 16 23:48:05.136135 kubelet[2157]: I0516 23:48:05.136074 2157 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 16 23:48:05.148229 kubelet[2157]: I0516 23:48:05.148184 2157 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 16 23:48:05.167260 kubelet[2157]: I0516 23:48:05.167022 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:05.168232 kubelet[2157]: I0516 23:48:05.168078 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:05.182124 kubelet[2157]: E0516 23:48:05.182083 2157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:05.182339 kubelet[2157]: E0516 23:48:05.182120 2157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:05.182523 kubelet[2157]: E0516 23:48:05.182501 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:05.182629 kubelet[2157]: E0516 23:48:05.182612 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:05.236382 kubelet[2157]: I0516 23:48:05.236347 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:05.238388 kubelet[2157]: E0516 23:48:05.238357 2157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:05.238388 kubelet[2157]: I0516 23:48:05.238386 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:05.240198 kubelet[2157]: E0516 23:48:05.240161 2157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:05.240198 kubelet[2157]: I0516 23:48:05.240182 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:05.241783 kubelet[2157]: E0516 23:48:05.241760 2157 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:06.455571 kubelet[2157]: I0516 23:48:06.455519 2157 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:06.461045 kubelet[2157]: E0516 23:48:06.460870 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:07.072642 systemd[1]: Reloading requested from client PID 2438 ('systemctl') (unit session-7.scope)...
May 16 23:48:07.072657 systemd[1]: Reloading...
May 16 23:48:07.138828 zram_generator::config[2480]: No configuration found.
May 16 23:48:07.169857 kubelet[2157]: E0516 23:48:07.169825 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:07.222999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 23:48:07.286845 systemd[1]: Reloading finished in 213 ms.
May 16 23:48:07.315723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 23:48:07.334874 systemd[1]: kubelet.service: Deactivated successfully.
May 16 23:48:07.335114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 23:48:07.335172 systemd[1]: kubelet.service: Consumed 2.400s CPU time, 130.5M memory peak, 0B memory swap peak.
May 16 23:48:07.346153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 23:48:07.446390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 23:48:07.450643 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 23:48:07.487543 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 23:48:07.487543 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 23:48:07.487543 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 23:48:07.487905 kubelet[2519]: I0516 23:48:07.487595 2519 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 23:48:07.493260 kubelet[2519]: I0516 23:48:07.493214 2519 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 16 23:48:07.493260 kubelet[2519]: I0516 23:48:07.493244 2519 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 23:48:07.495720 kubelet[2519]: I0516 23:48:07.495652 2519 server.go:954] "Client rotation is on, will bootstrap in background"
May 16 23:48:07.496904 kubelet[2519]: I0516 23:48:07.496865 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 16 23:48:07.499093 kubelet[2519]: I0516 23:48:07.499070 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 23:48:07.503324 kubelet[2519]: E0516 23:48:07.503275 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 23:48:07.503324 kubelet[2519]: I0516 23:48:07.503307 2519 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 23:48:07.505496 kubelet[2519]: I0516 23:48:07.505470 2519 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 23:48:07.505686 kubelet[2519]: I0516 23:48:07.505648 2519 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 23:48:07.505867 kubelet[2519]: I0516 23:48:07.505676 2519 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 23:48:07.505867 kubelet[2519]: I0516 23:48:07.505867 2519 topology_manager.go:138] "Creating topology manager with none policy"
May 16 23:48:07.505972 kubelet[2519]: I0516 23:48:07.505876 2519 container_manager_linux.go:304] "Creating device plugin manager"
May 16 23:48:07.505972 kubelet[2519]: I0516 23:48:07.505919 2519 state_mem.go:36] "Initialized new in-memory state store"
May 16 23:48:07.506058 kubelet[2519]: I0516 23:48:07.506044 2519 kubelet.go:446] "Attempting to sync node with API server"
May 16 23:48:07.506058 kubelet[2519]: I0516 23:48:07.506059 2519 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 23:48:07.506105 kubelet[2519]: I0516 23:48:07.506075 2519 kubelet.go:352] "Adding apiserver pod source"
May 16 23:48:07.506105 kubelet[2519]: I0516 23:48:07.506084 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 23:48:07.510721 kubelet[2519]: I0516 23:48:07.507481 2519 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 23:48:07.510721 kubelet[2519]: I0516 23:48:07.507944 2519 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 23:48:07.517800 kubelet[2519]: I0516 23:48:07.514699 2519 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 23:48:07.517800 kubelet[2519]: I0516 23:48:07.514820 2519 server.go:1287] "Started kubelet"
May 16 23:48:07.517800 kubelet[2519]: I0516 23:48:07.517049 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 23:48:07.520733 kubelet[2519]: I0516 23:48:07.520688 2519 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 16 23:48:07.521573 kubelet[2519]: I0516 23:48:07.521522 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 23:48:07.522018 kubelet[2519]: I0516 23:48:07.521998 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 23:48:07.522343 kubelet[2519]: I0516 23:48:07.522330 2519 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 23:48:07.522443 kubelet[2519]: E0516 23:48:07.522428 2519 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 23:48:07.525558 kubelet[2519]: I0516 23:48:07.522856 2519 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 23:48:07.525558 kubelet[2519]: I0516 23:48:07.522991 2519 reconciler.go:26] "Reconciler: start to sync state"
May 16 23:48:07.525558 kubelet[2519]: I0516 23:48:07.523012 2519 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 23:48:07.525558 kubelet[2519]: I0516 23:48:07.524370 2519 server.go:479] "Adding debug handlers to kubelet server"
May 16 23:48:07.534080 kubelet[2519]: I0516 23:48:07.532759 2519 factory.go:221] Registration of the systemd container factory successfully
May 16 23:48:07.534080 kubelet[2519]: I0516 23:48:07.532901 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 23:48:07.542520 kubelet[2519]: I0516 23:48:07.542470 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 23:48:07.543610 kubelet[2519]: I0516 23:48:07.543585 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 23:48:07.543610 kubelet[2519]: I0516 23:48:07.543610 2519 status_manager.go:227] "Starting to sync pod status with apiserver"
May 16 23:48:07.543701 kubelet[2519]: I0516 23:48:07.543627 2519 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 23:48:07.543701 kubelet[2519]: I0516 23:48:07.543633 2519 kubelet.go:2382] "Starting kubelet main sync loop"
May 16 23:48:07.543701 kubelet[2519]: E0516 23:48:07.543684 2519 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 23:48:07.550641 kubelet[2519]: I0516 23:48:07.549857 2519 factory.go:221] Registration of the containerd container factory successfully
May 16 23:48:07.553879 kubelet[2519]: E0516 23:48:07.553644 2519 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 23:48:07.582162 kubelet[2519]: I0516 23:48:07.582131 2519 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 23:48:07.582162 kubelet[2519]: I0516 23:48:07.582155 2519 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 23:48:07.582162 kubelet[2519]: I0516 23:48:07.582177 2519 state_mem.go:36] "Initialized new in-memory state store"
May 16 23:48:07.582363 kubelet[2519]: I0516 23:48:07.582330 2519 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 23:48:07.582363 kubelet[2519]: I0516 23:48:07.582341 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 23:48:07.582363 kubelet[2519]: I0516 23:48:07.582358 2519 policy_none.go:49] "None policy: Start"
May 16 23:48:07.582363 kubelet[2519]: I0516 23:48:07.582366 2519 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 23:48:07.582442 kubelet[2519]: I0516 23:48:07.582375 2519 state_mem.go:35] "Initializing new in-memory state store"
May 16 23:48:07.582478 kubelet[2519]: I0516 23:48:07.582464 2519 state_mem.go:75] "Updated machine memory state"
May 16 23:48:07.586377 kubelet[2519]: I0516 23:48:07.586306 2519 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 23:48:07.587577 kubelet[2519]: I0516 23:48:07.587540 2519 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 23:48:07.587577 kubelet[2519]: I0516 23:48:07.587558 2519 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 23:48:07.587842 kubelet[2519]: I0516 23:48:07.587766 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 23:48:07.590079 kubelet[2519]: E0516 23:48:07.590058 2519 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 23:48:07.645357 kubelet[2519]: I0516 23:48:07.645183 2519 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:07.645357 kubelet[2519]: I0516 23:48:07.645219 2519 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.645357 kubelet[2519]: I0516 23:48:07.645260 2519 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:07.678569 kubelet[2519]: E0516 23:48:07.678511 2519 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:07.693546 kubelet[2519]: I0516 23:48:07.693506 2519 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 23:48:07.723917 kubelet[2519]: I0516 23:48:07.723887 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.724037 kubelet[2519]: I0516 23:48:07.723926 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 16 23:48:07.724037 kubelet[2519]: I0516 23:48:07.723947 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost"
May 16 23:48:07.724037 kubelet[2519]: I0516 23:48:07.723965 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.724037 kubelet[2519]: I0516 23:48:07.723981 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.724037 kubelet[2519]: I0516 23:48:07.723995 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.724151 kubelet[2519]: I0516 23:48:07.724009 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 16 23:48:07.724151 kubelet[2519]: I0516 23:48:07.724023 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost"
May 16 23:48:07.724151 kubelet[2519]: I0516 23:48:07.724039 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90f445e3250a8e54529818b48469f760-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90f445e3250a8e54529818b48469f760\") " pod="kube-system/kube-apiserver-localhost"
May 16 23:48:07.728678 kubelet[2519]: I0516 23:48:07.728350 2519 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 16 23:48:07.728678 kubelet[2519]: I0516 23:48:07.728484 2519 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 16 23:48:07.979623 kubelet[2519]: E0516 23:48:07.979507 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:07.979623 kubelet[2519]: E0516 23:48:07.979506 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:07.979623 kubelet[2519]: E0516 23:48:07.979513 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:08.077315 sudo[2555]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 23:48:08.077592 sudo[2555]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 23:48:08.502420 sudo[2555]: pam_unix(sudo:session): session closed for user root
May 16 23:48:08.507572 kubelet[2519]: I0516 23:48:08.507317 2519 apiserver.go:52] "Watching apiserver"
May 16 23:48:08.523315 kubelet[2519]: I0516 23:48:08.523234 2519 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 16 23:48:08.562896 kubelet[2519]: I0516 23:48:08.562833 2519 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:08.563033 kubelet[2519]: E0516 23:48:08.562932 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:08.563831 kubelet[2519]: I0516 23:48:08.563499 2519 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:08.569883 kubelet[2519]: E0516 23:48:08.569180 2519 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 16 23:48:08.570014 kubelet[2519]: E0516 23:48:08.569890 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:08.571562 kubelet[2519]: E0516 23:48:08.571535 2519 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 23:48:08.572591 kubelet[2519]: E0516 23:48:08.571678 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:08.586077 kubelet[2519]: I0516 23:48:08.585986 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5859710009999999 podStartE2EDuration="1.585971001s" podCreationTimestamp="2025-05-16 23:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:08.585962271 +0000 UTC m=+1.130597107" watchObservedRunningTime="2025-05-16 23:48:08.585971001 +0000 UTC m=+1.130605797"
May 16 23:48:08.601751 kubelet[2519]: I0516 23:48:08.601693 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.601675685 podStartE2EDuration="2.601675685s" podCreationTimestamp="2025-05-16 23:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:08.593764655 +0000 UTC m=+1.138399451" watchObservedRunningTime="2025-05-16 23:48:08.601675685 +0000 UTC m=+1.146310481"
May 16 23:48:08.613199 kubelet[2519]: I0516 23:48:08.612738 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.612720836 podStartE2EDuration="1.612720836s" podCreationTimestamp="2025-05-16 23:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:08.601876038 +0000 UTC m=+1.146510834" watchObservedRunningTime="2025-05-16 23:48:08.612720836 +0000 UTC m=+1.157355632"
May 16 23:48:09.564628 kubelet[2519]: E0516 23:48:09.564595 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:09.565147 kubelet[2519]: E0516 23:48:09.564724 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:10.566637 kubelet[2519]: E0516 23:48:10.566609 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:10.598837 kubelet[2519]: E0516 23:48:10.598808 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:11.136123 sudo[1626]: pam_unix(sudo:session): session closed for user root
May 16 23:48:11.137234 sshd[1625]: Connection closed by 10.0.0.1 port 36780
May 16 23:48:11.137893 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
May 16 23:48:11.142853 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:36780.service: Deactivated successfully.
May 16 23:48:11.144485 systemd[1]: session-7.scope: Deactivated successfully.
May 16 23:48:11.144651 systemd[1]: session-7.scope: Consumed 8.148s CPU time, 157.6M memory peak, 0B memory swap peak.
May 16 23:48:11.145262 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit.
May 16 23:48:11.146234 systemd-logind[1426]: Removed session 7.
May 16 23:48:13.480636 kubelet[2519]: E0516 23:48:13.480527 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:13.495554 kubelet[2519]: I0516 23:48:13.495518 2519 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 16 23:48:13.495953 containerd[1448]: time="2025-05-16T23:48:13.495862319Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 16 23:48:13.496810 kubelet[2519]: I0516 23:48:13.496330 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 16 23:48:13.571422 kubelet[2519]: E0516 23:48:13.571349 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:14.407047 systemd[1]: Created slice kubepods-besteffort-pod9af10dc2_9cd3_4910_9af3_fe5083c10178.slice - libcontainer container kubepods-besteffort-pod9af10dc2_9cd3_4910_9af3_fe5083c10178.slice.
May 16 23:48:14.428886 systemd[1]: Created slice kubepods-burstable-pod0d6f061c_e01b_4913_8352_4775fa6bc524.slice - libcontainer container kubepods-burstable-pod0d6f061c_e01b_4913_8352_4775fa6bc524.slice.
May 16 23:48:14.466731 kubelet[2519]: I0516 23:48:14.466688 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9af10dc2-9cd3-4910-9af3-fe5083c10178-lib-modules\") pod \"kube-proxy-grmv4\" (UID: \"9af10dc2-9cd3-4910-9af3-fe5083c10178\") " pod="kube-system/kube-proxy-grmv4"
May 16 23:48:14.466951 kubelet[2519]: I0516 23:48:14.466867 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kz2c\" (UniqueName: \"kubernetes.io/projected/9af10dc2-9cd3-4910-9af3-fe5083c10178-kube-api-access-2kz2c\") pod \"kube-proxy-grmv4\" (UID: \"9af10dc2-9cd3-4910-9af3-fe5083c10178\") " pod="kube-system/kube-proxy-grmv4"
May 16 23:48:14.467239 kubelet[2519]: I0516 23:48:14.467036 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-bpf-maps\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467336 kubelet[2519]: I0516 23:48:14.467302 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-lib-modules\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467427 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-xtables-lock\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467449 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-run\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467466 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw47\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-kube-api-access-ssw47\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467486 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9af10dc2-9cd3-4910-9af3-fe5083c10178-kube-proxy\") pod \"kube-proxy-grmv4\" (UID: \"9af10dc2-9cd3-4910-9af3-fe5083c10178\") " pod="kube-system/kube-proxy-grmv4"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467501 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cni-path\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467807 kubelet[2519]: I0516 23:48:14.467515 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d6f061c-e01b-4913-8352-4775fa6bc524-clustermesh-secrets\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467551 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-config-path\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467566 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-net\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467580 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-hostproc\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467606 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-etc-cni-netd\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467627 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9af10dc2-9cd3-4910-9af3-fe5083c10178-xtables-lock\") pod \"kube-proxy-grmv4\" (UID: \"9af10dc2-9cd3-4910-9af3-fe5083c10178\") " pod="kube-system/kube-proxy-grmv4"
May 16 23:48:14.467964 kubelet[2519]: I0516 23:48:14.467644 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-cgroup\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.468084 kubelet[2519]: I0516 23:48:14.467659 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-kernel\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.468084 kubelet[2519]: I0516 23:48:14.467675 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-hubble-tls\") pod \"cilium-q8q8p\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " pod="kube-system/cilium-q8q8p"
May 16 23:48:14.553272 systemd[1]: Created slice kubepods-besteffort-podb85121e4_7675_4d3c_bb3c_bca5f73e213c.slice - libcontainer container kubepods-besteffort-podb85121e4_7675_4d3c_bb3c_bca5f73e213c.slice.
May 16 23:48:14.573817 kubelet[2519]: I0516 23:48:14.568493 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b85121e4-7675-4d3c-bb3c-bca5f73e213c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mnxzv\" (UID: \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\") " pod="kube-system/cilium-operator-6c4d7847fc-mnxzv" May 16 23:48:14.573817 kubelet[2519]: I0516 23:48:14.570013 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdfz\" (UniqueName: \"kubernetes.io/projected/b85121e4-7675-4d3c-bb3c-bca5f73e213c-kube-api-access-hhdfz\") pod \"cilium-operator-6c4d7847fc-mnxzv\" (UID: \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\") " pod="kube-system/cilium-operator-6c4d7847fc-mnxzv" May 16 23:48:14.721661 kubelet[2519]: E0516 23:48:14.721529 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:14.722524 containerd[1448]: time="2025-05-16T23:48:14.722350572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grmv4,Uid:9af10dc2-9cd3-4910-9af3-fe5083c10178,Namespace:kube-system,Attempt:0,}" May 16 23:48:14.731211 kubelet[2519]: E0516 23:48:14.731165 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:14.731592 containerd[1448]: time="2025-05-16T23:48:14.731558276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8q8p,Uid:0d6f061c-e01b-4913-8352-4775fa6bc524,Namespace:kube-system,Attempt:0,}" May 16 23:48:14.770585 containerd[1448]: time="2025-05-16T23:48:14.770466015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:14.770585 containerd[1448]: time="2025-05-16T23:48:14.770544877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:14.770585 containerd[1448]: time="2025-05-16T23:48:14.770560129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.772109 containerd[1448]: time="2025-05-16T23:48:14.771566643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.780910 containerd[1448]: time="2025-05-16T23:48:14.780372311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:14.780910 containerd[1448]: time="2025-05-16T23:48:14.780441566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:14.780910 containerd[1448]: time="2025-05-16T23:48:14.780456257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.780910 containerd[1448]: time="2025-05-16T23:48:14.780538962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.792998 systemd[1]: Started cri-containerd-c079f247045a8557f513eb29e57d8dd1a97c7eb98023f19e861da607d41f9c31.scope - libcontainer container c079f247045a8557f513eb29e57d8dd1a97c7eb98023f19e861da607d41f9c31. May 16 23:48:14.796922 systemd[1]: Started cri-containerd-105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda.scope - libcontainer container 105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda. 
May 16 23:48:14.821258 containerd[1448]: time="2025-05-16T23:48:14.821149765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grmv4,Uid:9af10dc2-9cd3-4910-9af3-fe5083c10178,Namespace:kube-system,Attempt:0,} returns sandbox id \"c079f247045a8557f513eb29e57d8dd1a97c7eb98023f19e861da607d41f9c31\"" May 16 23:48:14.822227 kubelet[2519]: E0516 23:48:14.822149 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:14.823653 containerd[1448]: time="2025-05-16T23:48:14.823556624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8q8p,Uid:0d6f061c-e01b-4913-8352-4775fa6bc524,Namespace:kube-system,Attempt:0,} returns sandbox id \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\"" May 16 23:48:14.824678 containerd[1448]: time="2025-05-16T23:48:14.824632713Z" level=info msg="CreateContainer within sandbox \"c079f247045a8557f513eb29e57d8dd1a97c7eb98023f19e861da607d41f9c31\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 23:48:14.824818 kubelet[2519]: E0516 23:48:14.824778 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:14.826457 containerd[1448]: time="2025-05-16T23:48:14.826226770Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 23:48:14.861397 kubelet[2519]: E0516 23:48:14.861356 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:14.862528 containerd[1448]: time="2025-05-16T23:48:14.861837908Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mnxzv,Uid:b85121e4-7675-4d3c-bb3c-bca5f73e213c,Namespace:kube-system,Attempt:0,}" May 16 23:48:14.881120 containerd[1448]: time="2025-05-16T23:48:14.880968282Z" level=info msg="CreateContainer within sandbox \"c079f247045a8557f513eb29e57d8dd1a97c7eb98023f19e861da607d41f9c31\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"639ed34a2cf7434a91b23834b8ad31f77bd50ab6f4c7229053afd54ff107da76\"" May 16 23:48:14.881620 containerd[1448]: time="2025-05-16T23:48:14.881530845Z" level=info msg="StartContainer for \"639ed34a2cf7434a91b23834b8ad31f77bd50ab6f4c7229053afd54ff107da76\"" May 16 23:48:14.905953 systemd[1]: Started cri-containerd-639ed34a2cf7434a91b23834b8ad31f77bd50ab6f4c7229053afd54ff107da76.scope - libcontainer container 639ed34a2cf7434a91b23834b8ad31f77bd50ab6f4c7229053afd54ff107da76. May 16 23:48:14.910522 containerd[1448]: time="2025-05-16T23:48:14.910370680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 23:48:14.910522 containerd[1448]: time="2025-05-16T23:48:14.910419158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 23:48:14.910522 containerd[1448]: time="2025-05-16T23:48:14.910430447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.910522 containerd[1448]: time="2025-05-16T23:48:14.910495018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 23:48:14.932003 systemd[1]: Started cri-containerd-ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4.scope - libcontainer container ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4. 
May 16 23:48:14.950949 containerd[1448]: time="2025-05-16T23:48:14.950501063Z" level=info msg="StartContainer for \"639ed34a2cf7434a91b23834b8ad31f77bd50ab6f4c7229053afd54ff107da76\" returns successfully" May 16 23:48:14.972064 containerd[1448]: time="2025-05-16T23:48:14.971966319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mnxzv,Uid:b85121e4-7675-4d3c-bb3c-bca5f73e213c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\"" May 16 23:48:14.973316 kubelet[2519]: E0516 23:48:14.973213 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:15.591974 kubelet[2519]: E0516 23:48:15.591945 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:19.217843 kubelet[2519]: E0516 23:48:19.217324 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:19.232445 kubelet[2519]: I0516 23:48:19.232375 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grmv4" podStartSLOduration=5.232360352 podStartE2EDuration="5.232360352s" podCreationTimestamp="2025-05-16 23:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:15.601260726 +0000 UTC m=+8.145895522" watchObservedRunningTime="2025-05-16 23:48:19.232360352 +0000 UTC m=+11.776995148" May 16 23:48:19.585835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332916736.mount: Deactivated successfully. 
May 16 23:48:19.599477 kubelet[2519]: E0516 23:48:19.599445 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:20.611852 kubelet[2519]: E0516 23:48:20.611822 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:22.118903 update_engine[1428]: I20250516 23:48:22.118836 1428 update_attempter.cc:509] Updating boot flags... May 16 23:48:22.186380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2916) May 16 23:48:22.270085 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2920) May 16 23:48:25.693351 containerd[1448]: time="2025-05-16T23:48:25.693296470Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:48:25.694278 containerd[1448]: time="2025-05-16T23:48:25.694022792Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 16 23:48:25.694884 containerd[1448]: time="2025-05-16T23:48:25.694845071Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:48:25.697162 containerd[1448]: time="2025-05-16T23:48:25.696996986Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.870531351s" May 16 23:48:25.697162 containerd[1448]: time="2025-05-16T23:48:25.697038122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 23:48:25.700170 containerd[1448]: time="2025-05-16T23:48:25.700076581Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 23:48:25.700706 containerd[1448]: time="2025-05-16T23:48:25.700669652Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 23:48:25.741574 containerd[1448]: time="2025-05-16T23:48:25.741407262Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\"" May 16 23:48:25.743575 containerd[1448]: time="2025-05-16T23:48:25.742857505Z" level=info msg="StartContainer for \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\"" May 16 23:48:25.769995 systemd[1]: Started cri-containerd-c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4.scope - libcontainer container c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4. 
May 16 23:48:25.793553 containerd[1448]: time="2025-05-16T23:48:25.793508682Z" level=info msg="StartContainer for \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\" returns successfully" May 16 23:48:25.854860 systemd[1]: cri-containerd-c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4.scope: Deactivated successfully. May 16 23:48:25.986571 containerd[1448]: time="2025-05-16T23:48:25.980754551Z" level=info msg="shim disconnected" id=c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4 namespace=k8s.io May 16 23:48:25.986571 containerd[1448]: time="2025-05-16T23:48:25.986470209Z" level=warning msg="cleaning up after shim disconnected" id=c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4 namespace=k8s.io May 16 23:48:25.986571 containerd[1448]: time="2025-05-16T23:48:25.986483814Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:48:26.611537 kubelet[2519]: E0516 23:48:26.611507 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:26.613700 containerd[1448]: time="2025-05-16T23:48:26.613646353Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 23:48:26.637194 containerd[1448]: time="2025-05-16T23:48:26.637151745Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\"" May 16 23:48:26.638265 containerd[1448]: time="2025-05-16T23:48:26.638207689Z" level=info msg="StartContainer for \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\"" May 16 23:48:26.669992 systemd[1]: Started 
cri-containerd-f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e.scope - libcontainer container f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e. May 16 23:48:26.692075 containerd[1448]: time="2025-05-16T23:48:26.692021229Z" level=info msg="StartContainer for \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\" returns successfully" May 16 23:48:26.710027 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 23:48:26.710269 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 23:48:26.710343 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 23:48:26.721178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 23:48:26.721358 systemd[1]: cri-containerd-f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e.scope: Deactivated successfully. May 16 23:48:26.732862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 23:48:26.736567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4-rootfs.mount: Deactivated successfully. May 16 23:48:26.744500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e-rootfs.mount: Deactivated successfully. 
May 16 23:48:26.749706 containerd[1448]: time="2025-05-16T23:48:26.749642515Z" level=info msg="shim disconnected" id=f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e namespace=k8s.io May 16 23:48:26.749706 containerd[1448]: time="2025-05-16T23:48:26.749697495Z" level=warning msg="cleaning up after shim disconnected" id=f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e namespace=k8s.io May 16 23:48:26.749706 containerd[1448]: time="2025-05-16T23:48:26.749706778Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:48:26.999657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757542098.mount: Deactivated successfully. May 16 23:48:27.428625 containerd[1448]: time="2025-05-16T23:48:27.428579122Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:48:27.429739 containerd[1448]: time="2025-05-16T23:48:27.429697904Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 16 23:48:27.430589 containerd[1448]: time="2025-05-16T23:48:27.430563879Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 23:48:27.432668 containerd[1448]: time="2025-05-16T23:48:27.432100003Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.731985727s" May 16 23:48:27.432668 
containerd[1448]: time="2025-05-16T23:48:27.432146659Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 23:48:27.434490 containerd[1448]: time="2025-05-16T23:48:27.434461449Z" level=info msg="CreateContainer within sandbox \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 23:48:27.446027 containerd[1448]: time="2025-05-16T23:48:27.445983899Z" level=info msg="CreateContainer within sandbox \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\"" May 16 23:48:27.447195 containerd[1448]: time="2025-05-16T23:48:27.446399401Z" level=info msg="StartContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\"" May 16 23:48:27.474964 systemd[1]: Started cri-containerd-caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8.scope - libcontainer container caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8. 
May 16 23:48:27.497460 containerd[1448]: time="2025-05-16T23:48:27.497412242Z" level=info msg="StartContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" returns successfully" May 16 23:48:27.616630 kubelet[2519]: E0516 23:48:27.616572 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:27.631913 kubelet[2519]: E0516 23:48:27.625263 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:27.636667 containerd[1448]: time="2025-05-16T23:48:27.636433624Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 23:48:27.674615 kubelet[2519]: I0516 23:48:27.674549 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mnxzv" podStartSLOduration=1.21591019 podStartE2EDuration="13.674533501s" podCreationTimestamp="2025-05-16 23:48:14 +0000 UTC" firstStartedPulling="2025-05-16 23:48:14.974149842 +0000 UTC m=+7.518784638" lastFinishedPulling="2025-05-16 23:48:27.432773153 +0000 UTC m=+19.977407949" observedRunningTime="2025-05-16 23:48:27.650930369 +0000 UTC m=+20.195565165" watchObservedRunningTime="2025-05-16 23:48:27.674533501 +0000 UTC m=+20.219168297" May 16 23:48:27.705986 containerd[1448]: time="2025-05-16T23:48:27.705872551Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\"" May 16 23:48:27.706566 containerd[1448]: time="2025-05-16T23:48:27.706527094Z" level=info 
msg="StartContainer for \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\"" May 16 23:48:27.748945 systemd[1]: Started cri-containerd-5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02.scope - libcontainer container 5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02. May 16 23:48:27.776048 containerd[1448]: time="2025-05-16T23:48:27.776007555Z" level=info msg="StartContainer for \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\" returns successfully" May 16 23:48:27.787575 systemd[1]: cri-containerd-5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02.scope: Deactivated successfully. May 16 23:48:27.813770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02-rootfs.mount: Deactivated successfully. May 16 23:48:27.890244 containerd[1448]: time="2025-05-16T23:48:27.890175019Z" level=info msg="shim disconnected" id=5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02 namespace=k8s.io May 16 23:48:27.890244 containerd[1448]: time="2025-05-16T23:48:27.890232039Z" level=warning msg="cleaning up after shim disconnected" id=5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02 namespace=k8s.io May 16 23:48:27.890244 containerd[1448]: time="2025-05-16T23:48:27.890241922Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:48:28.627494 kubelet[2519]: E0516 23:48:28.627128 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:28.627494 kubelet[2519]: E0516 23:48:28.627248 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:28.631727 containerd[1448]: time="2025-05-16T23:48:28.631678151Z" level=info msg="CreateContainer 
within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 23:48:28.674168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538528579.mount: Deactivated successfully. May 16 23:48:28.678028 containerd[1448]: time="2025-05-16T23:48:28.677977917Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\"" May 16 23:48:28.682125 containerd[1448]: time="2025-05-16T23:48:28.682077308Z" level=info msg="StartContainer for \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\"" May 16 23:48:28.708954 systemd[1]: Started cri-containerd-1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2.scope - libcontainer container 1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2. May 16 23:48:28.729728 systemd[1]: cri-containerd-1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2.scope: Deactivated successfully. May 16 23:48:28.732466 containerd[1448]: time="2025-05-16T23:48:28.732288846Z" level=info msg="StartContainer for \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\" returns successfully" May 16 23:48:28.749241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2-rootfs.mount: Deactivated successfully. 
May 16 23:48:28.755403 containerd[1448]: time="2025-05-16T23:48:28.755345940Z" level=info msg="shim disconnected" id=1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2 namespace=k8s.io May 16 23:48:28.755403 containerd[1448]: time="2025-05-16T23:48:28.755398797Z" level=warning msg="cleaning up after shim disconnected" id=1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2 namespace=k8s.io May 16 23:48:28.755403 containerd[1448]: time="2025-05-16T23:48:28.755407359Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:48:28.764599 containerd[1448]: time="2025-05-16T23:48:28.764554205Z" level=warning msg="cleanup warnings time=\"2025-05-16T23:48:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 23:48:29.631780 kubelet[2519]: E0516 23:48:29.631749 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 23:48:29.634455 containerd[1448]: time="2025-05-16T23:48:29.633674186Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 23:48:29.648097 containerd[1448]: time="2025-05-16T23:48:29.648047136Z" level=info msg="CreateContainer within sandbox \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\"" May 16 23:48:29.648665 containerd[1448]: time="2025-05-16T23:48:29.648636672Z" level=info msg="StartContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\"" May 16 23:48:29.679584 systemd[1]: Started 
cri-containerd-97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732.scope - libcontainer container 97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732. May 16 23:48:29.705827 containerd[1448]: time="2025-05-16T23:48:29.705779765Z" level=info msg="StartContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" returns successfully" May 16 23:48:29.736616 systemd[1]: run-containerd-runc-k8s.io-97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732-runc.6on3A9.mount: Deactivated successfully. May 16 23:48:29.835981 kubelet[2519]: I0516 23:48:29.835380 2519 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 23:48:29.878473 systemd[1]: Created slice kubepods-burstable-podba710571_70b7_4e8a_82a0_9f7f788ee118.slice - libcontainer container kubepods-burstable-podba710571_70b7_4e8a_82a0_9f7f788ee118.slice. May 16 23:48:29.890257 systemd[1]: Created slice kubepods-burstable-poda7cbe2dd_5e90_4d99_9ff0_35f817b3f41c.slice - libcontainer container kubepods-burstable-poda7cbe2dd_5e90_4d99_9ff0_35f817b3f41c.slice. 
May 16 23:48:29.979006 kubelet[2519]: I0516 23:48:29.978954 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c-config-volume\") pod \"coredns-668d6bf9bc-54tj8\" (UID: \"a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c\") " pod="kube-system/coredns-668d6bf9bc-54tj8"
May 16 23:48:29.979006 kubelet[2519]: I0516 23:48:29.979000 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdz5p\" (UniqueName: \"kubernetes.io/projected/a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c-kube-api-access-cdz5p\") pod \"coredns-668d6bf9bc-54tj8\" (UID: \"a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c\") " pod="kube-system/coredns-668d6bf9bc-54tj8"
May 16 23:48:29.979006 kubelet[2519]: I0516 23:48:29.979023 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba710571-70b7-4e8a-82a0-9f7f788ee118-config-volume\") pod \"coredns-668d6bf9bc-bfjwd\" (UID: \"ba710571-70b7-4e8a-82a0-9f7f788ee118\") " pod="kube-system/coredns-668d6bf9bc-bfjwd"
May 16 23:48:29.982813 kubelet[2519]: I0516 23:48:29.979916 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nq7j\" (UniqueName: \"kubernetes.io/projected/ba710571-70b7-4e8a-82a0-9f7f788ee118-kube-api-access-6nq7j\") pod \"coredns-668d6bf9bc-bfjwd\" (UID: \"ba710571-70b7-4e8a-82a0-9f7f788ee118\") " pod="kube-system/coredns-668d6bf9bc-bfjwd"
May 16 23:48:30.182564 kubelet[2519]: E0516 23:48:30.182372 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:30.183991 containerd[1448]: time="2025-05-16T23:48:30.183949597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfjwd,Uid:ba710571-70b7-4e8a-82a0-9f7f788ee118,Namespace:kube-system,Attempt:0,}"
May 16 23:48:30.197990 kubelet[2519]: E0516 23:48:30.197946 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:30.199916 containerd[1448]: time="2025-05-16T23:48:30.199860910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54tj8,Uid:a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c,Namespace:kube-system,Attempt:0,}"
May 16 23:48:30.636459 kubelet[2519]: E0516 23:48:30.636151 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:30.656894 kubelet[2519]: I0516 23:48:30.656822 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q8q8p" podStartSLOduration=5.783033873 podStartE2EDuration="16.656785225s" podCreationTimestamp="2025-05-16 23:48:14 +0000 UTC" firstStartedPulling="2025-05-16 23:48:14.825680699 +0000 UTC m=+7.370315455" lastFinishedPulling="2025-05-16 23:48:25.699432011 +0000 UTC m=+18.244066807" observedRunningTime="2025-05-16 23:48:30.656562203 +0000 UTC m=+23.201196999" watchObservedRunningTime="2025-05-16 23:48:30.656785225 +0000 UTC m=+23.201420021"
May 16 23:48:31.638135 kubelet[2519]: E0516 23:48:31.638106 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:31.908866 systemd-networkd[1386]: cilium_host: Link UP
May 16 23:48:31.910595 systemd-networkd[1386]: cilium_net: Link UP
May 16 23:48:31.910603 systemd-networkd[1386]: cilium_net: Gained carrier
May 16 23:48:31.910854 systemd-networkd[1386]: cilium_host: Gained carrier
May 16 23:48:32.000751 systemd-networkd[1386]: cilium_vxlan: Link UP
May 16 23:48:32.000759 systemd-networkd[1386]: cilium_vxlan: Gained carrier
May 16 23:48:32.198301 systemd-networkd[1386]: cilium_net: Gained IPv6LL
May 16 23:48:32.270047 systemd-networkd[1386]: cilium_host: Gained IPv6LL
May 16 23:48:32.307830 kernel: NET: Registered PF_ALG protocol family
May 16 23:48:32.640492 kubelet[2519]: E0516 23:48:32.640124 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:32.909203 systemd-networkd[1386]: lxc_health: Link UP
May 16 23:48:32.917157 systemd-networkd[1386]: lxc_health: Gained carrier
May 16 23:48:33.334065 systemd-networkd[1386]: lxc1776dc656214: Link UP
May 16 23:48:33.353904 kernel: eth0: renamed from tmp9d4f1
May 16 23:48:33.359092 systemd-networkd[1386]: tmp10de5: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 23:48:33.359158 systemd-networkd[1386]: tmp10de5: Cannot enable IPv6, ignoring: No such file or directory
May 16 23:48:33.359186 systemd-networkd[1386]: tmp10de5: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory
May 16 23:48:33.359196 systemd-networkd[1386]: tmp10de5: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory
May 16 23:48:33.359205 systemd-networkd[1386]: tmp10de5: Cannot set IPv6 proxy NDP, ignoring: No such file or directory
May 16 23:48:33.359218 systemd-networkd[1386]: tmp10de5: Cannot enable promote_secondaries for interface, ignoring: No such file or directory
May 16 23:48:33.360869 kernel: eth0: renamed from tmp10de5
May 16 23:48:33.370026 systemd-networkd[1386]: lxc51497c5cdee8: Link UP
May 16 23:48:33.370471 systemd-networkd[1386]: lxc1776dc656214: Gained carrier
May 16 23:48:33.371721 systemd-networkd[1386]: lxc51497c5cdee8: Gained carrier
May 16 23:48:33.950031 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
May 16 23:48:34.462038 systemd-networkd[1386]: lxc_health: Gained IPv6LL
May 16 23:48:34.746345 kubelet[2519]: E0516 23:48:34.743135 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:34.910893 systemd-networkd[1386]: lxc1776dc656214: Gained IPv6LL
May 16 23:48:35.095844 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:42352.service - OpenSSH per-connection server daemon (10.0.0.1:42352).
May 16 23:48:35.144840 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 42352 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:35.146055 sshd-session[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:35.150480 systemd-logind[1426]: New session 8 of user core.
May 16 23:48:35.156984 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 23:48:35.302519 sshd[3766]: Connection closed by 10.0.0.1 port 42352
May 16 23:48:35.303298 sshd-session[3764]: pam_unix(sshd:session): session closed for user core
May 16 23:48:35.305866 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:42352.service: Deactivated successfully.
May 16 23:48:35.308255 systemd[1]: session-8.scope: Deactivated successfully.
May 16 23:48:35.310384 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit.
May 16 23:48:35.312523 systemd-logind[1426]: Removed session 8.
May 16 23:48:35.357976 systemd-networkd[1386]: lxc51497c5cdee8: Gained IPv6LL
May 16 23:48:35.646975 kubelet[2519]: E0516 23:48:35.646154 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:36.647743 kubelet[2519]: E0516 23:48:36.647696 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:36.927861 containerd[1448]: time="2025-05-16T23:48:36.927474325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 23:48:36.927861 containerd[1448]: time="2025-05-16T23:48:36.927536057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 23:48:36.927861 containerd[1448]: time="2025-05-16T23:48:36.927548219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:48:36.927861 containerd[1448]: time="2025-05-16T23:48:36.927627354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:48:36.942139 containerd[1448]: time="2025-05-16T23:48:36.940741377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 23:48:36.942139 containerd[1448]: time="2025-05-16T23:48:36.940820752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 23:48:36.942139 containerd[1448]: time="2025-05-16T23:48:36.940836435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:48:36.942139 containerd[1448]: time="2025-05-16T23:48:36.940924252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:48:36.943600 systemd[1]: run-containerd-runc-k8s.io-9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46-runc.AJEyXH.mount: Deactivated successfully.
May 16 23:48:36.958018 systemd[1]: Started cri-containerd-9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46.scope - libcontainer container 9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46.
May 16 23:48:36.960386 systemd[1]: Started cri-containerd-10de556e813dfa5edbfbc2d8b3b49b00bc7b84e10ea7dc417c849682ee078966.scope - libcontainer container 10de556e813dfa5edbfbc2d8b3b49b00bc7b84e10ea7dc417c849682ee078966.
May 16 23:48:36.970666 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 23:48:36.972103 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 23:48:36.990276 containerd[1448]: time="2025-05-16T23:48:36.990235904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfjwd,Uid:ba710571-70b7-4e8a-82a0-9f7f788ee118,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46\""
May 16 23:48:36.991093 kubelet[2519]: E0516 23:48:36.990899 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:36.993602 containerd[1448]: time="2025-05-16T23:48:36.993567019Z" level=info msg="CreateContainer within sandbox \"9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 23:48:36.994319 containerd[1448]: time="2025-05-16T23:48:36.994291478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54tj8,Uid:a7cbe2dd-5e90-4d99-9ff0-35f817b3f41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"10de556e813dfa5edbfbc2d8b3b49b00bc7b84e10ea7dc417c849682ee078966\""
May 16 23:48:36.995180 kubelet[2519]: E0516 23:48:36.995138 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:36.997420 containerd[1448]: time="2025-05-16T23:48:36.997387229Z" level=info msg="CreateContainer within sandbox \"10de556e813dfa5edbfbc2d8b3b49b00bc7b84e10ea7dc417c849682ee078966\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 23:48:37.007153 containerd[1448]: time="2025-05-16T23:48:37.007110769Z" level=info msg="CreateContainer within sandbox \"9d4f1107a78e4745e855449e1310c267b2073f6491cb3af809b5b1d820aaff46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3076d9bf3400825d54530b4cfc4c895f999d1ffdb2f1c76e5bda4acc65ab18f2\""
May 16 23:48:37.007676 containerd[1448]: time="2025-05-16T23:48:37.007652826Z" level=info msg="StartContainer for \"3076d9bf3400825d54530b4cfc4c895f999d1ffdb2f1c76e5bda4acc65ab18f2\""
May 16 23:48:37.017977 containerd[1448]: time="2025-05-16T23:48:37.017928865Z" level=info msg="CreateContainer within sandbox \"10de556e813dfa5edbfbc2d8b3b49b00bc7b84e10ea7dc417c849682ee078966\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"394570223fd7b9c9f1b614aec9c282c315f8c142909ec218e1b26a98963412f5\""
May 16 23:48:37.020284 containerd[1448]: time="2025-05-16T23:48:37.020260082Z" level=info msg="StartContainer for \"394570223fd7b9c9f1b614aec9c282c315f8c142909ec218e1b26a98963412f5\""
May 16 23:48:37.034979 systemd[1]: Started cri-containerd-3076d9bf3400825d54530b4cfc4c895f999d1ffdb2f1c76e5bda4acc65ab18f2.scope - libcontainer container 3076d9bf3400825d54530b4cfc4c895f999d1ffdb2f1c76e5bda4acc65ab18f2.
May 16 23:48:37.039826 systemd[1]: Started cri-containerd-394570223fd7b9c9f1b614aec9c282c315f8c142909ec218e1b26a98963412f5.scope - libcontainer container 394570223fd7b9c9f1b614aec9c282c315f8c142909ec218e1b26a98963412f5.
May 16 23:48:37.063432 containerd[1448]: time="2025-05-16T23:48:37.063389879Z" level=info msg="StartContainer for \"3076d9bf3400825d54530b4cfc4c895f999d1ffdb2f1c76e5bda4acc65ab18f2\" returns successfully"
May 16 23:48:37.072304 containerd[1448]: time="2025-05-16T23:48:37.072266107Z" level=info msg="StartContainer for \"394570223fd7b9c9f1b614aec9c282c315f8c142909ec218e1b26a98963412f5\" returns successfully"
May 16 23:48:37.652395 kubelet[2519]: E0516 23:48:37.652017 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:37.658362 kubelet[2519]: E0516 23:48:37.657277 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:37.680335 kubelet[2519]: I0516 23:48:37.679478 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-54tj8" podStartSLOduration=23.679461153 podStartE2EDuration="23.679461153s" podCreationTimestamp="2025-05-16 23:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:37.667898124 +0000 UTC m=+30.212532920" watchObservedRunningTime="2025-05-16 23:48:37.679461153 +0000 UTC m=+30.224095949"
May 16 23:48:37.695328 kubelet[2519]: I0516 23:48:37.695242 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bfjwd" podStartSLOduration=23.695224333 podStartE2EDuration="23.695224333s" podCreationTimestamp="2025-05-16 23:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:48:37.681173059 +0000 UTC m=+30.225807855" watchObservedRunningTime="2025-05-16 23:48:37.695224333 +0000 UTC m=+30.239859129"
May 16 23:48:38.661332 kubelet[2519]: E0516 23:48:38.660950 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:38.661332 kubelet[2519]: E0516 23:48:38.661024 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:39.662650 kubelet[2519]: E0516 23:48:39.662596 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:39.663058 kubelet[2519]: E0516 23:48:39.662669 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:48:40.317088 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364).
May 16 23:48:40.365877 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:40.367335 sshd-session[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:40.372144 systemd-logind[1426]: New session 9 of user core.
May 16 23:48:40.380023 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 23:48:40.526488 sshd[3957]: Connection closed by 10.0.0.1 port 42364
May 16 23:48:40.527071 sshd-session[3955]: pam_unix(sshd:session): session closed for user core
May 16 23:48:40.530924 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:42364.service: Deactivated successfully.
May 16 23:48:40.535150 systemd[1]: session-9.scope: Deactivated successfully.
May 16 23:48:40.537091 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit.
May 16 23:48:40.538318 systemd-logind[1426]: Removed session 9.
May 16 23:48:45.543735 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:39410.service - OpenSSH per-connection server daemon (10.0.0.1:39410).
May 16 23:48:45.586753 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 39410 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:45.588191 sshd-session[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:45.592030 systemd-logind[1426]: New session 10 of user core.
May 16 23:48:45.603010 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 23:48:45.724864 sshd[3975]: Connection closed by 10.0.0.1 port 39410
May 16 23:48:45.725232 sshd-session[3973]: pam_unix(sshd:session): session closed for user core
May 16 23:48:45.729999 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:39410.service: Deactivated successfully.
May 16 23:48:45.733712 systemd[1]: session-10.scope: Deactivated successfully.
May 16 23:48:45.734655 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit.
May 16 23:48:45.735557 systemd-logind[1426]: Removed session 10.
May 16 23:48:50.739567 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:39418.service - OpenSSH per-connection server daemon (10.0.0.1:39418).
May 16 23:48:50.784774 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 39418 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:50.786208 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:50.792034 systemd-logind[1426]: New session 11 of user core.
May 16 23:48:50.802043 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 23:48:50.927987 sshd[3990]: Connection closed by 10.0.0.1 port 39418
May 16 23:48:50.928810 sshd-session[3988]: pam_unix(sshd:session): session closed for user core
May 16 23:48:50.935309 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:39418.service: Deactivated successfully.
May 16 23:48:50.938521 systemd[1]: session-11.scope: Deactivated successfully.
May 16 23:48:50.940704 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit.
May 16 23:48:50.951898 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:39424.service - OpenSSH per-connection server daemon (10.0.0.1:39424).
May 16 23:48:50.953990 systemd-logind[1426]: Removed session 11.
May 16 23:48:50.996326 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 39424 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:50.997644 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:51.002061 systemd-logind[1426]: New session 12 of user core.
May 16 23:48:51.011991 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 23:48:51.170495 sshd[4006]: Connection closed by 10.0.0.1 port 39424
May 16 23:48:51.171549 sshd-session[4004]: pam_unix(sshd:session): session closed for user core
May 16 23:48:51.182912 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:39424.service: Deactivated successfully.
May 16 23:48:51.187593 systemd[1]: session-12.scope: Deactivated successfully.
May 16 23:48:51.191244 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit.
May 16 23:48:51.206152 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:39434.service - OpenSSH per-connection server daemon (10.0.0.1:39434).
May 16 23:48:51.211120 systemd-logind[1426]: Removed session 12.
May 16 23:48:51.250782 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 39434 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:51.252052 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:51.256847 systemd-logind[1426]: New session 13 of user core.
May 16 23:48:51.274966 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 23:48:51.388818 sshd[4018]: Connection closed by 10.0.0.1 port 39434
May 16 23:48:51.389219 sshd-session[4016]: pam_unix(sshd:session): session closed for user core
May 16 23:48:51.392688 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:39434.service: Deactivated successfully.
May 16 23:48:51.394276 systemd[1]: session-13.scope: Deactivated successfully.
May 16 23:48:51.394936 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit.
May 16 23:48:51.395666 systemd-logind[1426]: Removed session 13.
May 16 23:48:56.408203 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:50256.service - OpenSSH per-connection server daemon (10.0.0.1:50256).
May 16 23:48:56.447661 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 50256 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:48:56.448606 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:48:56.452298 systemd-logind[1426]: New session 14 of user core.
May 16 23:48:56.461986 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 23:48:56.585837 sshd[4033]: Connection closed by 10.0.0.1 port 50256
May 16 23:48:56.586029 sshd-session[4031]: pam_unix(sshd:session): session closed for user core
May 16 23:48:56.589056 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:50256.service: Deactivated successfully.
May 16 23:48:56.592359 systemd[1]: session-14.scope: Deactivated successfully.
May 16 23:48:56.592960 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit.
May 16 23:48:56.593702 systemd-logind[1426]: Removed session 14.
May 16 23:49:01.597431 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:50270.service - OpenSSH per-connection server daemon (10.0.0.1:50270).
May 16 23:49:01.653041 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 50270 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:01.654270 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:01.657778 systemd-logind[1426]: New session 15 of user core.
May 16 23:49:01.665942 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 23:49:01.800283 sshd[4047]: Connection closed by 10.0.0.1 port 50270
May 16 23:49:01.800886 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
May 16 23:49:01.809176 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:50270.service: Deactivated successfully.
May 16 23:49:01.812158 systemd[1]: session-15.scope: Deactivated successfully.
May 16 23:49:01.813712 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit.
May 16 23:49:01.821069 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:50274.service - OpenSSH per-connection server daemon (10.0.0.1:50274).
May 16 23:49:01.822463 systemd-logind[1426]: Removed session 15.
May 16 23:49:01.858822 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 50274 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:01.860426 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:01.864047 systemd-logind[1426]: New session 16 of user core.
May 16 23:49:01.873955 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 23:49:02.086272 sshd[4061]: Connection closed by 10.0.0.1 port 50274
May 16 23:49:02.086857 sshd-session[4059]: pam_unix(sshd:session): session closed for user core
May 16 23:49:02.099438 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:50274.service: Deactivated successfully.
May 16 23:49:02.100963 systemd[1]: session-16.scope: Deactivated successfully.
May 16 23:49:02.102542 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit.
May 16 23:49:02.111061 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:50276.service - OpenSSH per-connection server daemon (10.0.0.1:50276).
May 16 23:49:02.112223 systemd-logind[1426]: Removed session 16.
May 16 23:49:02.153088 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 50276 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:02.154442 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:02.158058 systemd-logind[1426]: New session 17 of user core.
May 16 23:49:02.172975 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 23:49:02.798350 sshd[4074]: Connection closed by 10.0.0.1 port 50276
May 16 23:49:02.798177 sshd-session[4072]: pam_unix(sshd:session): session closed for user core
May 16 23:49:02.806352 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:50276.service: Deactivated successfully.
May 16 23:49:02.810385 systemd[1]: session-17.scope: Deactivated successfully.
May 16 23:49:02.812158 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit.
May 16 23:49:02.813338 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:56500.service - OpenSSH per-connection server daemon (10.0.0.1:56500).
May 16 23:49:02.815732 systemd-logind[1426]: Removed session 17.
May 16 23:49:02.864668 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 56500 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:02.866129 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:02.869936 systemd-logind[1426]: New session 18 of user core.
May 16 23:49:02.875941 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 23:49:03.108024 sshd[4095]: Connection closed by 10.0.0.1 port 56500
May 16 23:49:03.109379 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
May 16 23:49:03.117867 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:56500.service: Deactivated successfully.
May 16 23:49:03.120078 systemd[1]: session-18.scope: Deactivated successfully.
May 16 23:49:03.121785 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit.
May 16 23:49:03.134137 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:56508.service - OpenSSH per-connection server daemon (10.0.0.1:56508).
May 16 23:49:03.135555 systemd-logind[1426]: Removed session 18.
May 16 23:49:03.169709 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 56508 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:03.171105 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:03.177916 systemd-logind[1426]: New session 19 of user core.
May 16 23:49:03.187967 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 23:49:03.301661 sshd[4108]: Connection closed by 10.0.0.1 port 56508
May 16 23:49:03.302405 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
May 16 23:49:03.306215 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:56508.service: Deactivated successfully.
May 16 23:49:03.308068 systemd[1]: session-19.scope: Deactivated successfully.
May 16 23:49:03.308726 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit.
May 16 23:49:03.309551 systemd-logind[1426]: Removed session 19.
May 16 23:49:08.312460 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:56520.service - OpenSSH per-connection server daemon (10.0.0.1:56520).
May 16 23:49:08.352006 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 56520 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:08.353198 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:08.357614 systemd-logind[1426]: New session 20 of user core.
May 16 23:49:08.368189 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 23:49:08.479826 sshd[4126]: Connection closed by 10.0.0.1 port 56520
May 16 23:49:08.480101 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
May 16 23:49:08.483343 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:56520.service: Deactivated successfully.
May 16 23:49:08.485357 systemd[1]: session-20.scope: Deactivated successfully.
May 16 23:49:08.486264 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit.
May 16 23:49:08.487365 systemd-logind[1426]: Removed session 20.
May 16 23:49:13.494639 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:57388.service - OpenSSH per-connection server daemon (10.0.0.1:57388).
May 16 23:49:13.541802 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 57388 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:13.543145 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:13.548681 systemd-logind[1426]: New session 21 of user core.
May 16 23:49:13.559233 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 23:49:13.673544 sshd[4140]: Connection closed by 10.0.0.1 port 57388
May 16 23:49:13.673900 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
May 16 23:49:13.677023 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:57388.service: Deactivated successfully.
May 16 23:49:13.678838 systemd[1]: session-21.scope: Deactivated successfully.
May 16 23:49:13.679409 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit.
May 16 23:49:13.680251 systemd-logind[1426]: Removed session 21.
May 16 23:49:18.685591 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:57422.service - OpenSSH per-connection server daemon (10.0.0.1:57422).
May 16 23:49:18.731325 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 57422 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:18.732728 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:18.737739 systemd-logind[1426]: New session 22 of user core.
May 16 23:49:18.744016 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 23:49:18.868813 sshd[4157]: Connection closed by 10.0.0.1 port 57422
May 16 23:49:18.869470 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
May 16 23:49:18.877716 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:57422.service: Deactivated successfully.
May 16 23:49:18.879729 systemd[1]: session-22.scope: Deactivated successfully.
May 16 23:49:18.882293 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit.
May 16 23:49:18.893186 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:57428.service - OpenSSH per-connection server daemon (10.0.0.1:57428).
May 16 23:49:18.894853 systemd-logind[1426]: Removed session 22.
May 16 23:49:18.936916 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 57428 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:18.938295 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:18.942866 systemd-logind[1426]: New session 23 of user core.
May 16 23:49:18.951989 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 23:49:20.795740 containerd[1448]: time="2025-05-16T23:49:20.795324444Z" level=info msg="StopContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" with timeout 30 (s)"
May 16 23:49:20.795740 containerd[1448]: time="2025-05-16T23:49:20.795627258Z" level=info msg="Stop container \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" with signal terminated"
May 16 23:49:20.810945 systemd[1]: cri-containerd-caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8.scope: Deactivated successfully.
May 16 23:49:20.833499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8-rootfs.mount: Deactivated successfully.
May 16 23:49:20.838400 containerd[1448]: time="2025-05-16T23:49:20.838259694Z" level=info msg="StopContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" with timeout 2 (s)"
May 16 23:49:20.838879 containerd[1448]: time="2025-05-16T23:49:20.838760036Z" level=info msg="Stop container \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" with signal terminated"
May 16 23:49:20.845502 systemd-networkd[1386]: lxc_health: Link DOWN
May 16 23:49:20.845507 systemd-networkd[1386]: lxc_health: Lost carrier
May 16 23:49:20.847388 containerd[1448]: time="2025-05-16T23:49:20.847180326Z" level=info msg="shim disconnected" id=caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8 namespace=k8s.io
May 16 23:49:20.847388 containerd[1448]: time="2025-05-16T23:49:20.847242489Z" level=warning msg="cleaning up after shim disconnected" id=caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8 namespace=k8s.io
May 16 23:49:20.847388 containerd[1448]: time="2025-05-16T23:49:20.847256850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:20.858385 containerd[1448]: time="2025-05-16T23:49:20.858316296Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 23:49:20.866609 systemd[1]: cri-containerd-97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732.scope: Deactivated successfully.
May 16 23:49:20.866903 systemd[1]: cri-containerd-97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732.scope: Consumed 6.520s CPU time.
May 16 23:49:20.887496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732-rootfs.mount: Deactivated successfully.
May 16 23:49:20.896747 containerd[1448]: time="2025-05-16T23:49:20.895982074Z" level=info msg="shim disconnected" id=97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732 namespace=k8s.io
May 16 23:49:20.896747 containerd[1448]: time="2025-05-16T23:49:20.896747668Z" level=warning msg="cleaning up after shim disconnected" id=97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732 namespace=k8s.io
May 16 23:49:20.896747 containerd[1448]: time="2025-05-16T23:49:20.896756748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:20.901033 containerd[1448]: time="2025-05-16T23:49:20.900989094Z" level=info msg="StopContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" returns successfully"
May 16 23:49:20.904549 containerd[1448]: time="2025-05-16T23:49:20.904493408Z" level=info msg="StopPodSandbox for \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\""
May 16 23:49:20.907753 containerd[1448]: time="2025-05-16T23:49:20.907709350Z" level=info msg="Container to stop \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 23:49:20.909482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4-shm.mount: Deactivated successfully.
May 16 23:49:20.916073 containerd[1448]: time="2025-05-16T23:49:20.916022476Z" level=info msg="StopContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" returns successfully" May 16 23:49:20.916586 containerd[1448]: time="2025-05-16T23:49:20.916554019Z" level=info msg="StopPodSandbox for \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\"" May 16 23:49:20.916642 containerd[1448]: time="2025-05-16T23:49:20.916596941Z" level=info msg="Container to stop \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 23:49:20.916642 containerd[1448]: time="2025-05-16T23:49:20.916608622Z" level=info msg="Container to stop \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 23:49:20.916642 containerd[1448]: time="2025-05-16T23:49:20.916617102Z" level=info msg="Container to stop \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 23:49:20.916642 containerd[1448]: time="2025-05-16T23:49:20.916625262Z" level=info msg="Container to stop \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 23:49:20.916642 containerd[1448]: time="2025-05-16T23:49:20.916633623Z" level=info msg="Container to stop \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 23:49:20.919666 systemd[1]: cri-containerd-ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4.scope: Deactivated successfully. May 16 23:49:20.933386 systemd[1]: cri-containerd-105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda.scope: Deactivated successfully. 
May 16 23:49:20.954325 containerd[1448]: time="2025-05-16T23:49:20.954165914Z" level=info msg="shim disconnected" id=ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4 namespace=k8s.io May 16 23:49:20.954325 containerd[1448]: time="2025-05-16T23:49:20.954221317Z" level=warning msg="cleaning up after shim disconnected" id=ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4 namespace=k8s.io May 16 23:49:20.954325 containerd[1448]: time="2025-05-16T23:49:20.954238037Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:49:20.965005 containerd[1448]: time="2025-05-16T23:49:20.964860265Z" level=info msg="shim disconnected" id=105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda namespace=k8s.io May 16 23:49:20.965005 containerd[1448]: time="2025-05-16T23:49:20.964918347Z" level=warning msg="cleaning up after shim disconnected" id=105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda namespace=k8s.io May 16 23:49:20.965005 containerd[1448]: time="2025-05-16T23:49:20.964927348Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 23:49:20.976356 containerd[1448]: time="2025-05-16T23:49:20.976165442Z" level=info msg="TearDown network for sandbox \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\" successfully" May 16 23:49:20.976356 containerd[1448]: time="2025-05-16T23:49:20.976202884Z" level=info msg="StopPodSandbox for \"ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4\" returns successfully" May 16 23:49:20.978018 containerd[1448]: time="2025-05-16T23:49:20.977980882Z" level=warning msg="cleanup warnings time=\"2025-05-16T23:49:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 23:49:20.979113 containerd[1448]: time="2025-05-16T23:49:20.978916843Z" level=info msg="TearDown network for sandbox 
\"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" successfully" May 16 23:49:20.979113 containerd[1448]: time="2025-05-16T23:49:20.978942005Z" level=info msg="StopPodSandbox for \"105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda\" returns successfully" May 16 23:49:21.002060 kubelet[2519]: I0516 23:49:21.001918 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b85121e4-7675-4d3c-bb3c-bca5f73e213c-cilium-config-path\") pod \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\" (UID: \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\") " May 16 23:49:21.002060 kubelet[2519]: I0516 23:49:21.001967 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhdfz\" (UniqueName: \"kubernetes.io/projected/b85121e4-7675-4d3c-bb3c-bca5f73e213c-kube-api-access-hhdfz\") pod \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\" (UID: \"b85121e4-7675-4d3c-bb3c-bca5f73e213c\") " May 16 23:49:21.012851 kubelet[2519]: I0516 23:49:21.009162 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b85121e4-7675-4d3c-bb3c-bca5f73e213c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b85121e4-7675-4d3c-bb3c-bca5f73e213c" (UID: "b85121e4-7675-4d3c-bb3c-bca5f73e213c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 23:49:21.013942 kubelet[2519]: I0516 23:49:21.013896 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b85121e4-7675-4d3c-bb3c-bca5f73e213c-kube-api-access-hhdfz" (OuterVolumeSpecName: "kube-api-access-hhdfz") pod "b85121e4-7675-4d3c-bb3c-bca5f73e213c" (UID: "b85121e4-7675-4d3c-bb3c-bca5f73e213c"). InnerVolumeSpecName "kube-api-access-hhdfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102435 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssw47\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-kube-api-access-ssw47\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102478 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cni-path\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102496 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-cgroup\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102515 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-hubble-tls\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102532 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d6f061c-e01b-4913-8352-4775fa6bc524-clustermesh-secrets\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102562 kubelet[2519]: I0516 23:49:21.102545 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-hostproc\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102566 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-xtables-lock\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102584 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-config-path\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102598 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-kernel\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102613 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-bpf-maps\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102627 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-lib-modules\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102779 kubelet[2519]: I0516 23:49:21.102641 2519 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-etc-cni-netd\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102933 kubelet[2519]: I0516 23:49:21.102655 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-net\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102933 kubelet[2519]: I0516 23:49:21.102670 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-run\") pod \"0d6f061c-e01b-4913-8352-4775fa6bc524\" (UID: \"0d6f061c-e01b-4913-8352-4775fa6bc524\") " May 16 23:49:21.102933 kubelet[2519]: I0516 23:49:21.102701 2519 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b85121e4-7675-4d3c-bb3c-bca5f73e213c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.102933 kubelet[2519]: I0516 23:49:21.102713 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhdfz\" (UniqueName: \"kubernetes.io/projected/b85121e4-7675-4d3c-bb3c-bca5f73e213c-kube-api-access-hhdfz\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.102933 kubelet[2519]: I0516 23:49:21.102759 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.104764 kubelet[2519]: I0516 23:49:21.103078 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.104764 kubelet[2519]: I0516 23:49:21.103119 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.104764 kubelet[2519]: I0516 23:49:21.103134 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.104764 kubelet[2519]: I0516 23:49:21.103548 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.104764 kubelet[2519]: I0516 23:49:21.103582 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.105519 kubelet[2519]: I0516 23:49:21.105484 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 23:49:21.105519 kubelet[2519]: I0516 23:49:21.105499 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-kube-api-access-ssw47" (OuterVolumeSpecName: "kube-api-access-ssw47") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "kube-api-access-ssw47". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 23:49:21.105599 kubelet[2519]: I0516 23:49:21.105530 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.105599 kubelet[2519]: I0516 23:49:21.105534 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.105599 kubelet[2519]: I0516 23:49:21.105546 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.105599 kubelet[2519]: I0516 23:49:21.105560 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 23:49:21.105748 kubelet[2519]: I0516 23:49:21.105725 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 23:49:21.106011 kubelet[2519]: I0516 23:49:21.105966 2519 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6f061c-e01b-4913-8352-4775fa6bc524-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d6f061c-e01b-4913-8352-4775fa6bc524" (UID: "0d6f061c-e01b-4913-8352-4775fa6bc524"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 23:49:21.203130 kubelet[2519]: I0516 23:49:21.203079 2519 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203130 kubelet[2519]: I0516 23:49:21.203115 2519 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d6f061c-e01b-4913-8352-4775fa6bc524-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203130 kubelet[2519]: I0516 23:49:21.203126 2519 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203130 kubelet[2519]: I0516 23:49:21.203135 2519 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203130 kubelet[2519]: I0516 23:49:21.203143 2519 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203152 2519 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203160 2519 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203168 2519 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203175 2519 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203183 2519 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203190 2519 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203197 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssw47\" (UniqueName: \"kubernetes.io/projected/0d6f061c-e01b-4913-8352-4775fa6bc524-kube-api-access-ssw47\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.203370 kubelet[2519]: I0516 23:49:21.203205 2519 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cni-path\") on node \"localhost\" 
DevicePath \"\"" May 16 23:49:21.203529 kubelet[2519]: I0516 23:49:21.203214 2519 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d6f061c-e01b-4913-8352-4775fa6bc524-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 23:49:21.552025 systemd[1]: Removed slice kubepods-burstable-pod0d6f061c_e01b_4913_8352_4775fa6bc524.slice - libcontainer container kubepods-burstable-pod0d6f061c_e01b_4913_8352_4775fa6bc524.slice. May 16 23:49:21.552118 systemd[1]: kubepods-burstable-pod0d6f061c_e01b_4913_8352_4775fa6bc524.slice: Consumed 6.663s CPU time. May 16 23:49:21.553430 systemd[1]: Removed slice kubepods-besteffort-podb85121e4_7675_4d3c_bb3c_bca5f73e213c.slice - libcontainer container kubepods-besteffort-podb85121e4_7675_4d3c_bb3c_bca5f73e213c.slice. May 16 23:49:21.749039 kubelet[2519]: I0516 23:49:21.748987 2519 scope.go:117] "RemoveContainer" containerID="caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8" May 16 23:49:21.751162 containerd[1448]: time="2025-05-16T23:49:21.750840364Z" level=info msg="RemoveContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\"" May 16 23:49:21.755306 containerd[1448]: time="2025-05-16T23:49:21.755126789Z" level=info msg="RemoveContainer for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" returns successfully" May 16 23:49:21.755543 kubelet[2519]: I0516 23:49:21.755484 2519 scope.go:117] "RemoveContainer" containerID="caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8" May 16 23:49:21.755741 containerd[1448]: time="2025-05-16T23:49:21.755653292Z" level=error msg="ContainerStatus for \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\": not found" May 16 23:49:21.755935 kubelet[2519]: E0516 23:49:21.755867 2519 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\": not found" containerID="caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8" May 16 23:49:21.766539 kubelet[2519]: I0516 23:49:21.766353 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8"} err="failed to get container status \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"caa3d2f7b00b12b893b88ed0487a1df66c7d852573daf023f9d69241c26c97f8\": not found" May 16 23:49:21.766539 kubelet[2519]: I0516 23:49:21.766453 2519 scope.go:117] "RemoveContainer" containerID="97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732" May 16 23:49:21.768929 containerd[1448]: time="2025-05-16T23:49:21.767459680Z" level=info msg="RemoveContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\"" May 16 23:49:21.772847 containerd[1448]: time="2025-05-16T23:49:21.772712626Z" level=info msg="RemoveContainer for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" returns successfully" May 16 23:49:21.773119 kubelet[2519]: I0516 23:49:21.773005 2519 scope.go:117] "RemoveContainer" containerID="1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2" May 16 23:49:21.775326 containerd[1448]: time="2025-05-16T23:49:21.774820636Z" level=info msg="RemoveContainer for \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\"" May 16 23:49:21.777923 containerd[1448]: time="2025-05-16T23:49:21.777481911Z" level=info msg="RemoveContainer for \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\" returns successfully" May 16 23:49:21.778057 kubelet[2519]: I0516 23:49:21.778013 2519 
scope.go:117] "RemoveContainer" containerID="5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02" May 16 23:49:21.779121 containerd[1448]: time="2025-05-16T23:49:21.779094740Z" level=info msg="RemoveContainer for \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\"" May 16 23:49:21.781533 containerd[1448]: time="2025-05-16T23:49:21.781506244Z" level=info msg="RemoveContainer for \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\" returns successfully" May 16 23:49:21.781713 kubelet[2519]: I0516 23:49:21.781693 2519 scope.go:117] "RemoveContainer" containerID="f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e" May 16 23:49:21.783265 containerd[1448]: time="2025-05-16T23:49:21.783236199Z" level=info msg="RemoveContainer for \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\"" May 16 23:49:21.791509 containerd[1448]: time="2025-05-16T23:49:21.791472033Z" level=info msg="RemoveContainer for \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\" returns successfully" May 16 23:49:21.791722 kubelet[2519]: I0516 23:49:21.791644 2519 scope.go:117] "RemoveContainer" containerID="c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4" May 16 23:49:21.793268 containerd[1448]: time="2025-05-16T23:49:21.793231549Z" level=info msg="RemoveContainer for \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\"" May 16 23:49:21.800316 containerd[1448]: time="2025-05-16T23:49:21.798096958Z" level=info msg="RemoveContainer for \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\" returns successfully" May 16 23:49:21.800565 kubelet[2519]: I0516 23:49:21.799871 2519 scope.go:117] "RemoveContainer" containerID="97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732" May 16 23:49:21.804770 containerd[1448]: time="2025-05-16T23:49:21.800665469Z" level=error msg="ContainerStatus for \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\": not found" May 16 23:49:21.805093 kubelet[2519]: E0516 23:49:21.804727 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\": not found" containerID="97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732" May 16 23:49:21.805093 kubelet[2519]: I0516 23:49:21.804756 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732"} err="failed to get container status \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\": rpc error: code = NotFound desc = an error occurred when try to find container \"97f4c60a9df9148e8363ef037423631a09ccc8c177a67e5c7efeb62414561732\": not found" May 16 23:49:21.805093 kubelet[2519]: I0516 23:49:21.804776 2519 scope.go:117] "RemoveContainer" containerID="1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2" May 16 23:49:21.805217 containerd[1448]: time="2025-05-16T23:49:21.805012816Z" level=error msg="ContainerStatus for \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\": not found" May 16 23:49:21.805518 kubelet[2519]: E0516 23:49:21.805477 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\": not found" containerID="1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2" May 16 23:49:21.805518 kubelet[2519]: I0516 
23:49:21.805507 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2"} err="failed to get container status \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c46e7bfdd6fd14c0592a0d126f5b2b3bbfa606494348b17a550c2de947fa7f2\": not found" May 16 23:49:21.805617 kubelet[2519]: I0516 23:49:21.805524 2519 scope.go:117] "RemoveContainer" containerID="5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02" May 16 23:49:21.805716 containerd[1448]: time="2025-05-16T23:49:21.805686045Z" level=error msg="ContainerStatus for \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\": not found" May 16 23:49:21.806043 kubelet[2519]: E0516 23:49:21.806018 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\": not found" containerID="5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02" May 16 23:49:21.806181 kubelet[2519]: I0516 23:49:21.806050 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02"} err="failed to get container status \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\": rpc error: code = NotFound desc = an error occurred when try to find container \"5014421dd144518e73e2b733e3229cbc3b66eaf8bc042e19d17dcf3e2c122a02\": not found" May 16 23:49:21.806221 kubelet[2519]: I0516 23:49:21.806185 2519 scope.go:117] "RemoveContainer" 
containerID="f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e"
May 16 23:49:21.806409 containerd[1448]: time="2025-05-16T23:49:21.806378795Z" level=error msg="ContainerStatus for \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\": not found"
May 16 23:49:21.806581 kubelet[2519]: E0516 23:49:21.806560 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\": not found" containerID="f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e"
May 16 23:49:21.806619 kubelet[2519]: I0516 23:49:21.806588 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e"} err="failed to get container status \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6b073a7fe7d38ba5ec8a8ff95faccc5ec174c5e12b7484acd7473b23669fc5e\": not found"
May 16 23:49:21.806619 kubelet[2519]: I0516 23:49:21.806604 2519 scope.go:117] "RemoveContainer" containerID="c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4"
May 16 23:49:21.806859 containerd[1448]: time="2025-05-16T23:49:21.806830814Z" level=error msg="ContainerStatus for \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\": not found"
May 16 23:49:21.806967 kubelet[2519]: E0516 23:49:21.806945 2519 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\": not found" containerID="c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4"
May 16 23:49:21.807013 kubelet[2519]: I0516 23:49:21.806972 2519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4"} err="failed to get container status \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c59e65b7f657b8061df40f606c596f7f174c11d169a73ba3715f2ab3dc57fed4\": not found"
May 16 23:49:21.817139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec2c8bfd98619a4d09bf290353aa3ca360f7ae564d2cd2b3ebda4cf1939cfbc4-rootfs.mount: Deactivated successfully.
May 16 23:49:21.817235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda-rootfs.mount: Deactivated successfully.
May 16 23:49:21.817304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-105edf3638530067eebb4e95c4977d83d0cd26a3e4195e91314273bcf34a9bda-shm.mount: Deactivated successfully.
May 16 23:49:21.817361 systemd[1]: var-lib-kubelet-pods-b85121e4\x2d7675\x2d4d3c\x2dbb3c\x2dbca5f73e213c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhdfz.mount: Deactivated successfully.
May 16 23:49:21.817421 systemd[1]: var-lib-kubelet-pods-0d6f061c\x2de01b\x2d4913\x2d8352\x2d4775fa6bc524-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssw47.mount: Deactivated successfully.
May 16 23:49:21.817471 systemd[1]: var-lib-kubelet-pods-0d6f061c\x2de01b\x2d4913\x2d8352\x2d4775fa6bc524-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 23:49:21.817517 systemd[1]: var-lib-kubelet-pods-0d6f061c\x2de01b\x2d4913\x2d8352\x2d4775fa6bc524-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 23:49:22.616973 kubelet[2519]: E0516 23:49:22.616867 2519 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:49:22.728776 sshd[4171]: Connection closed by 10.0.0.1 port 57428
May 16 23:49:22.729360 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
May 16 23:49:22.745036 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:57428.service: Deactivated successfully.
May 16 23:49:22.746360 systemd[1]: session-23.scope: Deactivated successfully.
May 16 23:49:22.746508 systemd[1]: session-23.scope: Consumed 1.123s CPU time.
May 16 23:49:22.747585 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit.
May 16 23:49:22.748755 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:34754.service - OpenSSH per-connection server daemon (10.0.0.1:34754).
May 16 23:49:22.750336 systemd-logind[1426]: Removed session 23.
May 16 23:49:22.811941 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 34754 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:22.813024 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:22.816877 systemd-logind[1426]: New session 24 of user core.
May 16 23:49:22.823925 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 23:49:23.547497 kubelet[2519]: I0516 23:49:23.546683 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6f061c-e01b-4913-8352-4775fa6bc524" path="/var/lib/kubelet/pods/0d6f061c-e01b-4913-8352-4775fa6bc524/volumes"
May 16 23:49:23.547497 kubelet[2519]: I0516 23:49:23.547234 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b85121e4-7675-4d3c-bb3c-bca5f73e213c" path="/var/lib/kubelet/pods/b85121e4-7675-4d3c-bb3c-bca5f73e213c/volumes"
May 16 23:49:23.664606 sshd[4332]: Connection closed by 10.0.0.1 port 34754
May 16 23:49:23.667240 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
May 16 23:49:23.675993 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:34754.service: Deactivated successfully.
May 16 23:49:23.678118 systemd[1]: session-24.scope: Deactivated successfully.
May 16 23:49:23.680577 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit.
May 16 23:49:23.690184 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760).
May 16 23:49:23.692342 systemd-logind[1426]: Removed session 24.
May 16 23:49:23.705712 kubelet[2519]: I0516 23:49:23.705667 2519 memory_manager.go:355] "RemoveStaleState removing state" podUID="b85121e4-7675-4d3c-bb3c-bca5f73e213c" containerName="cilium-operator"
May 16 23:49:23.705712 kubelet[2519]: I0516 23:49:23.705710 2519 memory_manager.go:355] "RemoveStaleState removing state" podUID="0d6f061c-e01b-4913-8352-4775fa6bc524" containerName="cilium-agent"
May 16 23:49:23.721126 systemd[1]: Created slice kubepods-burstable-podb59828ac_bfb4_44e9_a423_be361cbc1584.slice - libcontainer container kubepods-burstable-podb59828ac_bfb4_44e9_a423_be361cbc1584.slice.
May 16 23:49:23.744390 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:23.746167 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:23.750263 systemd-logind[1426]: New session 25 of user core.
May 16 23:49:23.756003 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 23:49:23.808273 sshd[4345]: Connection closed by 10.0.0.1 port 34760
May 16 23:49:23.808768 sshd-session[4343]: pam_unix(sshd:session): session closed for user core
May 16 23:49:23.817230 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:34760.service: Deactivated successfully.
May 16 23:49:23.818614 kubelet[2519]: I0516 23:49:23.818575 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-hostproc\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818699 kubelet[2519]: I0516 23:49:23.818640 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-cilium-cgroup\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818699 kubelet[2519]: I0516 23:49:23.818683 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-host-proc-sys-kernel\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818829 kubelet[2519]: I0516 23:49:23.818707 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b59828ac-bfb4-44e9-a423-be361cbc1584-hubble-tls\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818829 kubelet[2519]: I0516 23:49:23.818730 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b59828ac-bfb4-44e9-a423-be361cbc1584-cilium-ipsec-secrets\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818829 kubelet[2519]: I0516 23:49:23.818760 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b59828ac-bfb4-44e9-a423-be361cbc1584-clustermesh-secrets\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.818829 kubelet[2519]: I0516 23:49:23.818781 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzpp\" (UniqueName: \"kubernetes.io/projected/b59828ac-bfb4-44e9-a423-be361cbc1584-kube-api-access-9jzpp\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.818908 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-lib-modules\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.818930 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-xtables-lock\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.818956 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b59828ac-bfb4-44e9-a423-be361cbc1584-cilium-config-path\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.818974 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-cni-path\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.818994 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-bpf-maps\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819173 kubelet[2519]: I0516 23:49:23.819012 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-cilium-run\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819083 systemd[1]: session-25.scope: Deactivated successfully.
May 16 23:49:23.819422 kubelet[2519]: I0516 23:49:23.819048 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-etc-cni-netd\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819422 kubelet[2519]: I0516 23:49:23.819067 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b59828ac-bfb4-44e9-a423-be361cbc1584-host-proc-sys-net\") pod \"cilium-64xsb\" (UID: \"b59828ac-bfb4-44e9-a423-be361cbc1584\") " pod="kube-system/cilium-64xsb"
May 16 23:49:23.819799 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit.
May 16 23:49:23.821750 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:34766.service - OpenSSH per-connection server daemon (10.0.0.1:34766).
May 16 23:49:23.822720 systemd-logind[1426]: Removed session 25.
May 16 23:49:23.861809 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 34766 ssh2: RSA SHA256:qbe6Tf26uiE90UJ4xf+VALHK0eRUUG/A+SAKyiAr2hk
May 16 23:49:23.863067 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 23:49:23.867283 systemd-logind[1426]: New session 26 of user core.
May 16 23:49:23.875944 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 23:49:24.026276 kubelet[2519]: E0516 23:49:24.026003 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:24.027090 containerd[1448]: time="2025-05-16T23:49:24.026632568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64xsb,Uid:b59828ac-bfb4-44e9-a423-be361cbc1584,Namespace:kube-system,Attempt:0,}"
May 16 23:49:24.049303 containerd[1448]: time="2025-05-16T23:49:24.049207277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 23:49:24.049452 containerd[1448]: time="2025-05-16T23:49:24.049255279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 23:49:24.049452 containerd[1448]: time="2025-05-16T23:49:24.049266560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:49:24.049452 containerd[1448]: time="2025-05-16T23:49:24.049331402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:49:24.065973 systemd[1]: Started cri-containerd-8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77.scope - libcontainer container 8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77.
May 16 23:49:24.084776 containerd[1448]: time="2025-05-16T23:49:24.084739430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64xsb,Uid:b59828ac-bfb4-44e9-a423-be361cbc1584,Namespace:kube-system,Attempt:0,} returns sandbox id \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\""
May 16 23:49:24.085634 kubelet[2519]: E0516 23:49:24.085607 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:24.089633 containerd[1448]: time="2025-05-16T23:49:24.089433819Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 23:49:24.099436 containerd[1448]: time="2025-05-16T23:49:24.099264615Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6\""
May 16 23:49:24.100959 containerd[1448]: time="2025-05-16T23:49:24.100728554Z" level=info msg="StartContainer for \"6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6\""
May 16 23:49:24.122970 systemd[1]: Started cri-containerd-6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6.scope - libcontainer container 6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6.
May 16 23:49:24.146381 containerd[1448]: time="2025-05-16T23:49:24.146272030Z" level=info msg="StartContainer for \"6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6\" returns successfully"
May 16 23:49:24.168875 systemd[1]: cri-containerd-6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6.scope: Deactivated successfully.
May 16 23:49:24.219991 containerd[1448]: time="2025-05-16T23:49:24.219906717Z" level=info msg="shim disconnected" id=6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6 namespace=k8s.io
May 16 23:49:24.219991 containerd[1448]: time="2025-05-16T23:49:24.219964600Z" level=warning msg="cleaning up after shim disconnected" id=6077d3fb44e96b9fa1eedca6f7281ba1f3085070fb7d88043f97958fe1af66d6 namespace=k8s.io
May 16 23:49:24.219991 containerd[1448]: time="2025-05-16T23:49:24.219975480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:24.761801 kubelet[2519]: E0516 23:49:24.761760 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:24.766820 containerd[1448]: time="2025-05-16T23:49:24.766343500Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 23:49:24.775149 containerd[1448]: time="2025-05-16T23:49:24.775051811Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c\""
May 16 23:49:24.776841 containerd[1448]: time="2025-05-16T23:49:24.776616394Z" level=info msg="StartContainer for \"572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c\""
May 16 23:49:24.803957 systemd[1]: Started cri-containerd-572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c.scope - libcontainer container 572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c.
May 16 23:49:24.825511 containerd[1448]: time="2025-05-16T23:49:24.825404801Z" level=info msg="StartContainer for \"572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c\" returns successfully"
May 16 23:49:24.831252 systemd[1]: cri-containerd-572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c.scope: Deactivated successfully.
May 16 23:49:24.851395 containerd[1448]: time="2025-05-16T23:49:24.851239082Z" level=info msg="shim disconnected" id=572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c namespace=k8s.io
May 16 23:49:24.851395 containerd[1448]: time="2025-05-16T23:49:24.851288764Z" level=warning msg="cleaning up after shim disconnected" id=572e2821bf6bdf6496469592df2d593bfa34fd7c74c7d879e68b75612a36a18c namespace=k8s.io
May 16 23:49:24.851395 containerd[1448]: time="2025-05-16T23:49:24.851296364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:25.764484 kubelet[2519]: E0516 23:49:25.764450 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:25.768865 containerd[1448]: time="2025-05-16T23:49:25.768825448Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 23:49:25.791494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033457680.mount: Deactivated successfully.
May 16 23:49:25.791971 containerd[1448]: time="2025-05-16T23:49:25.791928799Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2\""
May 16 23:49:25.792652 containerd[1448]: time="2025-05-16T23:49:25.792587785Z" level=info msg="StartContainer for \"0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2\""
May 16 23:49:25.820961 systemd[1]: Started cri-containerd-0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2.scope - libcontainer container 0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2.
May 16 23:49:25.851752 containerd[1448]: time="2025-05-16T23:49:25.851683276Z" level=info msg="StartContainer for \"0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2\" returns successfully"
May 16 23:49:25.852084 systemd[1]: cri-containerd-0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2.scope: Deactivated successfully.
May 16 23:49:25.874425 containerd[1448]: time="2025-05-16T23:49:25.874212365Z" level=info msg="shim disconnected" id=0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2 namespace=k8s.io
May 16 23:49:25.874425 containerd[1448]: time="2025-05-16T23:49:25.874263607Z" level=warning msg="cleaning up after shim disconnected" id=0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2 namespace=k8s.io
May 16 23:49:25.874425 containerd[1448]: time="2025-05-16T23:49:25.874272727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:25.883864 containerd[1448]: time="2025-05-16T23:49:25.883803783Z" level=warning msg="cleanup warnings time=\"2025-05-16T23:49:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 16 23:49:25.925165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd312a48a239f745ee0e122735543fb01574b1ef283963e3f811d1b0d2037f2-rootfs.mount: Deactivated successfully.
May 16 23:49:26.768281 kubelet[2519]: E0516 23:49:26.768253 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:26.770622 containerd[1448]: time="2025-05-16T23:49:26.770586889Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 23:49:26.783627 containerd[1448]: time="2025-05-16T23:49:26.783580751Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f\""
May 16 23:49:26.784990 containerd[1448]: time="2025-05-16T23:49:26.784959884Z" level=info msg="StartContainer for \"fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f\""
May 16 23:49:26.817980 systemd[1]: Started cri-containerd-fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f.scope - libcontainer container fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f.
May 16 23:49:26.837272 systemd[1]: cri-containerd-fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f.scope: Deactivated successfully.
May 16 23:49:26.839901 containerd[1448]: time="2025-05-16T23:49:26.839808963Z" level=info msg="StartContainer for \"fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f\" returns successfully"
May 16 23:49:26.858106 containerd[1448]: time="2025-05-16T23:49:26.858055187Z" level=info msg="shim disconnected" id=fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f namespace=k8s.io
May 16 23:49:26.858413 containerd[1448]: time="2025-05-16T23:49:26.858265275Z" level=warning msg="cleaning up after shim disconnected" id=fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f namespace=k8s.io
May 16 23:49:26.858413 containerd[1448]: time="2025-05-16T23:49:26.858279596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 23:49:26.924760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc3786ed1efafe2e9b1ec022678ec3f84071fefee016ba0d310f9efa52ae032f-rootfs.mount: Deactivated successfully.
May 16 23:49:27.618291 kubelet[2519]: E0516 23:49:27.618241 2519 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:49:27.772115 kubelet[2519]: E0516 23:49:27.772071 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:27.774342 containerd[1448]: time="2025-05-16T23:49:27.774288994Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 23:49:27.796638 containerd[1448]: time="2025-05-16T23:49:27.796586197Z" level=info msg="CreateContainer within sandbox \"8574e685464c6c8361902cbaac3e93dd7a06e6cd5d3f5b31e382b775a6de1e77\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f\""
May 16 23:49:27.797280 containerd[1448]: time="2025-05-16T23:49:27.797189860Z" level=info msg="StartContainer for \"80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f\""
May 16 23:49:27.820963 systemd[1]: Started cri-containerd-80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f.scope - libcontainer container 80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f.
May 16 23:49:27.846749 containerd[1448]: time="2025-05-16T23:49:27.846707613Z" level=info msg="StartContainer for \"80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f\" returns successfully"
May 16 23:49:28.111412 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 16 23:49:28.777927 kubelet[2519]: E0516 23:49:28.777891 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:28.793782 kubelet[2519]: I0516 23:49:28.793697 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-64xsb" podStartSLOduration=5.793678891 podStartE2EDuration="5.793678891s" podCreationTimestamp="2025-05-16 23:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 23:49:28.793403361 +0000 UTC m=+81.338038157" watchObservedRunningTime="2025-05-16 23:49:28.793678891 +0000 UTC m=+81.338313687"
May 16 23:49:29.553478 kubelet[2519]: I0516 23:49:29.553399 2519 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T23:49:29Z","lastTransitionTime":"2025-05-16T23:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 23:49:30.026772 kubelet[2519]: E0516 23:49:30.026737 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:31.050367 systemd-networkd[1386]: lxc_health: Link UP
May 16 23:49:31.058372 systemd-networkd[1386]: lxc_health: Gained carrier
May 16 23:49:32.027827 kubelet[2519]: E0516 23:49:32.027495 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:32.254927 systemd-networkd[1386]: lxc_health: Gained IPv6LL
May 16 23:49:32.784627 kubelet[2519]: E0516 23:49:32.784511 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:33.786070 kubelet[2519]: E0516 23:49:33.786027 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:34.454029 systemd[1]: run-containerd-runc-k8s.io-80f302ae1282b58fdf4e14540640d504b2b08897e8f2a2cb2afb90d02c7af26f-runc.JaLr2A.mount: Deactivated successfully.
May 16 23:49:34.545127 kubelet[2519]: E0516 23:49:34.545098 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 23:49:36.641887 sshd[4353]: Connection closed by 10.0.0.1 port 34766
May 16 23:49:36.642782 sshd-session[4351]: pam_unix(sshd:session): session closed for user core
May 16 23:49:36.646474 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:34766.service: Deactivated successfully.
May 16 23:49:36.648462 systemd[1]: session-26.scope: Deactivated successfully.
May 16 23:49:36.650518 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit.
May 16 23:49:36.651516 systemd-logind[1426]: Removed session 26.