May 9 23:43:51.909193 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 23:43:51.909217 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025
May 9 23:43:51.909227 kernel: KASLR enabled
May 9 23:43:51.909233 kernel: efi: EFI v2.7 by EDK II
May 9 23:43:51.909239 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 9 23:43:51.909245 kernel: random: crng init done
May 9 23:43:51.909252 kernel: secureboot: Secure boot disabled
May 9 23:43:51.909258 kernel: ACPI: Early table checksum verification disabled
May 9 23:43:51.909264 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 9 23:43:51.909272 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 23:43:51.909278 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909284 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909291 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909297 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909305 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909313 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909319 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909326 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909332 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:43:51.909339 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 23:43:51.909345 kernel: NUMA: Failed to initialise from firmware
May 9 23:43:51.909352 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:43:51.909358 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 9 23:43:51.909365 kernel: Zone ranges:
May 9 23:43:51.909371 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:43:51.909379 kernel: DMA32 empty
May 9 23:43:51.909385 kernel: Normal empty
May 9 23:43:51.909392 kernel: Movable zone start for each node
May 9 23:43:51.909398 kernel: Early memory node ranges
May 9 23:43:51.909405 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 9 23:43:51.909412 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 9 23:43:51.909419 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 9 23:43:51.909425 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 23:43:51.909432 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 23:43:51.909438 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 23:43:51.909444 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 23:43:51.909451 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:43:51.909459 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 23:43:51.909465 kernel: psci: probing for conduit method from ACPI.
May 9 23:43:51.909496 kernel: psci: PSCIv1.1 detected in firmware.
May 9 23:43:51.909508 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:43:51.909515 kernel: psci: Trusted OS migration not required
May 9 23:43:51.909522 kernel: psci: SMC Calling Convention v1.1
May 9 23:43:51.909530 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 23:43:51.909537 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:43:51.909544 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:43:51.909551 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 23:43:51.909558 kernel: Detected PIPT I-cache on CPU0
May 9 23:43:51.909565 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:43:51.909572 kernel: CPU features: detected: Hardware dirty bit management
May 9 23:43:51.909579 kernel: CPU features: detected: Spectre-v4
May 9 23:43:51.909585 kernel: CPU features: detected: Spectre-BHB
May 9 23:43:51.909592 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 23:43:51.909601 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 23:43:51.909608 kernel: CPU features: detected: ARM erratum 1418040
May 9 23:43:51.909615 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 23:43:51.909621 kernel: alternatives: applying boot alternatives
May 9 23:43:51.909629 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4
May 9 23:43:51.909636 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:43:51.909643 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:43:51.909650 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:43:51.909657 kernel: Fallback order for Node 0: 0
May 9 23:43:51.909664 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 23:43:51.909670 kernel: Policy zone: DMA
May 9 23:43:51.909679 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:43:51.909686 kernel: software IO TLB: area num 4.
May 9 23:43:51.909693 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 9 23:43:51.909700 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 9 23:43:51.909707 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 23:43:51.909714 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:43:51.909727 kernel: rcu: RCU event tracing is enabled.
May 9 23:43:51.909735 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 23:43:51.909742 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:43:51.909749 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:43:51.909756 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:43:51.909763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 23:43:51.909771 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:43:51.909778 kernel: GICv3: 256 SPIs implemented
May 9 23:43:51.909785 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:43:51.909792 kernel: Root IRQ handler: gic_handle_irq
May 9 23:43:51.909798 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 23:43:51.909805 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 23:43:51.909812 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 23:43:51.909819 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:43:51.909826 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:43:51.909833 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 23:43:51.909840 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 23:43:51.909848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:43:51.909855 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:43:51.909862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 23:43:51.909869 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 23:43:51.909876 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 23:43:51.909883 kernel: arm-pv: using stolen time PV
May 9 23:43:51.909890 kernel: Console: colour dummy device 80x25
May 9 23:43:51.909897 kernel: ACPI: Core revision 20230628
May 9 23:43:51.909905 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 23:43:51.909912 kernel: pid_max: default: 32768 minimum: 301
May 9 23:43:51.909920 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:43:51.909927 kernel: landlock: Up and running.
May 9 23:43:51.909934 kernel: SELinux: Initializing.
May 9 23:43:51.909941 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:43:51.909948 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:43:51.909956 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 23:43:51.909963 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:43:51.909970 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:43:51.909977 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:43:51.909986 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:43:51.909993 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 23:43:51.910000 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 23:43:51.910007 kernel: Remapping and enabling EFI services.
May 9 23:43:51.910014 kernel: smp: Bringing up secondary CPUs ...
May 9 23:43:51.910021 kernel: Detected PIPT I-cache on CPU1
May 9 23:43:51.910028 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 23:43:51.910036 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 23:43:51.910043 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:43:51.910051 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 23:43:51.910060 kernel: Detected PIPT I-cache on CPU2
May 9 23:43:51.910067 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 23:43:51.910079 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 23:43:51.910088 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:43:51.910096 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 23:43:51.910103 kernel: Detected PIPT I-cache on CPU3
May 9 23:43:51.910110 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 23:43:51.910121 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 23:43:51.910131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:43:51.910138 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 23:43:51.910148 kernel: smp: Brought up 1 node, 4 CPUs
May 9 23:43:51.910156 kernel: SMP: Total of 4 processors activated.
May 9 23:43:51.910164 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:43:51.910171 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 23:43:51.910179 kernel: CPU features: detected: Common not Private translations
May 9 23:43:51.910187 kernel: CPU features: detected: CRC32 instructions
May 9 23:43:51.910195 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 23:43:51.910204 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 23:43:51.910213 kernel: CPU features: detected: LSE atomic instructions
May 9 23:43:51.910221 kernel: CPU features: detected: Privileged Access Never
May 9 23:43:51.910229 kernel: CPU features: detected: RAS Extension Support
May 9 23:43:51.910237 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 23:43:51.910245 kernel: CPU: All CPU(s) started at EL1
May 9 23:43:51.910253 kernel: alternatives: applying system-wide alternatives
May 9 23:43:51.910261 kernel: devtmpfs: initialized
May 9 23:43:51.910269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:43:51.910279 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 23:43:51.910287 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:43:51.910295 kernel: SMBIOS 3.0.0 present.
May 9 23:43:51.910303 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 23:43:51.910311 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:43:51.910319 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:43:51.910327 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:43:51.910335 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:43:51.910343 kernel: audit: initializing netlink subsys (disabled)
May 9 23:43:51.910353 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 23:43:51.910360 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:43:51.910369 kernel: cpuidle: using governor menu
May 9 23:43:51.910377 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:43:51.910385 kernel: ASID allocator initialised with 32768 entries
May 9 23:43:51.910393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:43:51.910400 kernel: Serial: AMBA PL011 UART driver
May 9 23:43:51.910408 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:43:51.910416 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:43:51.910425 kernel: Modules: 508944 pages in range for PLT usage
May 9 23:43:51.910433 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:43:51.910441 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:43:51.910449 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:43:51.910456 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:43:51.910528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:43:51.910539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:43:51.910547 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:43:51.910557 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:43:51.910565 kernel: ACPI: Added _OSI(Module Device)
May 9 23:43:51.910573 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:43:51.910580 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:43:51.910588 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:43:51.910595 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:43:51.910603 kernel: ACPI: Interpreter enabled
May 9 23:43:51.910610 kernel: ACPI: Using GIC for interrupt routing
May 9 23:43:51.910618 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:43:51.910628 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:43:51.910636 kernel: printk: console [ttyAMA0] enabled
May 9 23:43:51.910786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 23:43:51.910862 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:43:51.910927 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:43:51.910989 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:43:51.911052 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 23:43:51.911064 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 23:43:51.911072 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 23:43:51.911072 kernel: PCI host bridge to bus 0000:00
May 9 23:43:51.911142 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 23:43:51.911202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:43:51.911264 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 23:43:51.911325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 23:43:51.911406 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 23:43:51.911504 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 23:43:51.911579 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 23:43:51.911672 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 23:43:51.911777 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:43:51.911846 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:43:51.911916 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 23:43:51.911986 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 23:43:51.912050 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 23:43:51.912108 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:43:51.912169 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 23:43:51.912179 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:43:51.912187 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:43:51.912195 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:43:51.912203 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:43:51.912211 kernel: iommu: Default domain type: Translated
May 9 23:43:51.912221 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:43:51.912229 kernel: efivars: Registered efivars operations
May 9 23:43:51.912237 kernel: vgaarb: loaded
May 9 23:43:51.912244 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:43:51.912252 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:43:51.912260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:43:51.912267 kernel: pnp: PnP ACPI init
May 9 23:43:51.912344 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 23:43:51.912357 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:43:51.912365 kernel: NET: Registered PF_INET protocol family
May 9 23:43:51.912372 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:43:51.912380 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:43:51.912388 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:43:51.912395 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:43:51.912403 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:43:51.912410 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:43:51.912418 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:43:51.912427 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:43:51.912435 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:43:51.912442 kernel: PCI: CLS 0 bytes, default 64
May 9 23:43:51.912450 kernel: kvm [1]: HYP mode not available
May 9 23:43:51.912457 kernel: Initialise system trusted keyrings
May 9 23:43:51.912464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:43:51.912490 kernel: Key type asymmetric registered
May 9 23:43:51.912499 kernel: Asymmetric key parser 'x509' registered
May 9 23:43:51.912506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:43:51.912516 kernel: io scheduler mq-deadline registered
May 9 23:43:51.912523 kernel: io scheduler kyber registered
May 9 23:43:51.912531 kernel: io scheduler bfq registered
May 9 23:43:51.912539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:43:51.912546 kernel: ACPI: button: Power Button [PWRB]
May 9 23:43:51.912554 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:43:51.912625 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 23:43:51.912635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:43:51.912643 kernel: thunder_xcv, ver 1.0
May 9 23:43:51.912652 kernel: thunder_bgx, ver 1.0
May 9 23:43:51.912660 kernel: nicpf, ver 1.0
May 9 23:43:51.912667 kernel: nicvf, ver 1.0
May 9 23:43:51.912748 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:43:51.912815 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:43:51 UTC (1746834231)
May 9 23:43:51.912825 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:43:51.912833 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 23:43:51.912841 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:43:51.912850 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:43:51.912858 kernel: NET: Registered PF_INET6 protocol family
May 9 23:43:51.912866 kernel: Segment Routing with IPv6
May 9 23:43:51.912873 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:43:51.912881 kernel: NET: Registered PF_PACKET protocol family
May 9 23:43:51.912889 kernel: Key type dns_resolver registered
May 9 23:43:51.912896 kernel: registered taskstats version 1
May 9 23:43:51.912904 kernel: Loading compiled-in X.509 certificates
May 9 23:43:51.912912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966'
May 9 23:43:51.912921 kernel: Key type .fscrypt registered
May 9 23:43:51.912928 kernel: Key type fscrypt-provisioning registered
May 9 23:43:51.912935 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:43:51.912943 kernel: ima: Allocated hash algorithm: sha1
May 9 23:43:51.912951 kernel: ima: No architecture policies found
May 9 23:43:51.912958 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:43:51.912966 kernel: clk: Disabling unused clocks
May 9 23:43:51.912974 kernel: Freeing unused kernel memory: 39744K
May 9 23:43:51.912981 kernel: Run /init as init process
May 9 23:43:51.912990 kernel: with arguments:
May 9 23:43:51.912998 kernel: /init
May 9 23:43:51.913005 kernel: with environment:
May 9 23:43:51.913012 kernel: HOME=/
May 9 23:43:51.913020 kernel: TERM=linux
May 9 23:43:51.913027 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:43:51.913037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:43:51.913047 systemd[1]: Detected virtualization kvm.
May 9 23:43:51.913058 systemd[1]: Detected architecture arm64.
May 9 23:43:51.913066 systemd[1]: Running in initrd.
May 9 23:43:51.913074 systemd[1]: No hostname configured, using default hostname.
May 9 23:43:51.913082 systemd[1]: Hostname set to .
May 9 23:43:51.913090 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:43:51.913098 systemd[1]: Queued start job for default target initrd.target.
May 9 23:43:51.913106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:43:51.913114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:43:51.913125 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:43:51.913133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:43:51.913141 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:43:51.913149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:43:51.913159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:43:51.913167 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:43:51.913177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:43:51.913185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:43:51.913192 systemd[1]: Reached target paths.target - Path Units.
May 9 23:43:51.913200 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:43:51.913208 systemd[1]: Reached target swap.target - Swaps.
May 9 23:43:51.913216 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:43:51.913224 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:43:51.913233 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:43:51.913241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:43:51.913250 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:43:51.913258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:43:51.913266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:43:51.913274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:43:51.913282 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:43:51.913290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:43:51.913299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:43:51.913307 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:43:51.913315 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:43:51.913324 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:43:51.913333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:43:51.913341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:43:51.913349 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:43:51.913358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:43:51.913366 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:43:51.913392 systemd-journald[239]: Collecting audit messages is disabled.
May 9 23:43:51.913414 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:43:51.913425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:43:51.913434 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:43:51.913443 systemd-journald[239]: Journal started
May 9 23:43:51.913467 systemd-journald[239]: Runtime Journal (/run/log/journal/0101db97a9394eb5bc74762cfc7d289c) is 5.9M, max 47.3M, 41.4M free.
May 9 23:43:51.904108 systemd-modules-load[240]: Inserted module 'overlay'
May 9 23:43:51.917854 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:43:51.919490 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:43:51.920280 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:43:51.921642 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:43:51.925730 kernel: Bridge firewalling registered
May 9 23:43:51.922948 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 9 23:43:51.924316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:43:51.928730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:43:51.931581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:43:51.936313 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:43:51.939290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:43:51.942225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:43:51.943768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:43:51.955664 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:43:51.957820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:43:51.966502 dracut-cmdline[274]: dracut-dracut-053
May 9 23:43:51.969127 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4
May 9 23:43:51.985768 systemd-resolved[276]: Positive Trust Anchors:
May 9 23:43:51.985843 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:43:51.985874 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:43:51.990584 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 9 23:43:51.991560 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:43:51.994230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:43:52.042513 kernel: SCSI subsystem initialized
May 9 23:43:52.046490 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:43:52.056518 kernel: iscsi: registered transport (tcp)
May 9 23:43:52.070499 kernel: iscsi: registered transport (qla4xxx)
May 9 23:43:52.070555 kernel: QLogic iSCSI HBA Driver
May 9 23:43:52.115444 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:43:52.125683 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:43:52.143898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:43:52.143972 kernel: device-mapper: uevent: version 1.0.3
May 9 23:43:52.144736 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:43:52.189500 kernel: raid6: neonx8 gen() 15779 MB/s
May 9 23:43:52.206487 kernel: raid6: neonx4 gen() 15638 MB/s
May 9 23:43:52.223486 kernel: raid6: neonx2 gen() 13227 MB/s
May 9 23:43:52.240485 kernel: raid6: neonx1 gen() 10483 MB/s
May 9 23:43:52.257488 kernel: raid6: int64x8 gen() 6960 MB/s
May 9 23:43:52.274487 kernel: raid6: int64x4 gen() 7346 MB/s
May 9 23:43:52.291503 kernel: raid6: int64x2 gen() 6127 MB/s
May 9 23:43:52.308491 kernel: raid6: int64x1 gen() 5052 MB/s
May 9 23:43:52.308528 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
May 9 23:43:52.325500 kernel: raid6: .... xor() 11917 MB/s, rmw enabled
May 9 23:43:52.325524 kernel: raid6: using neon recovery algorithm
May 9 23:43:52.330567 kernel: xor: measuring software checksum speed
May 9 23:43:52.330593 kernel: 8regs : 19778 MB/sec
May 9 23:43:52.331620 kernel: 32regs : 19013 MB/sec
May 9 23:43:52.331634 kernel: arm64_neon : 27034 MB/sec
May 9 23:43:52.331652 kernel: xor: using function: arm64_neon (27034 MB/sec)
May 9 23:43:52.382506 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:43:52.395315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:43:52.409668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:43:52.421333 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 9 23:43:52.424536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:43:52.431681 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:43:52.444112 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
May 9 23:43:52.471109 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:43:52.489704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:43:52.531450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:43:52.537671 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:43:52.551972 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:43:52.554882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:43:52.556515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:43:52.558663 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:43:52.566025 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:43:52.578580 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:43:52.582490 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 23:43:52.583648 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 23:43:52.590973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:43:52.590997 kernel: GPT:9289727 != 19775487
May 9 23:43:52.591769 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:43:52.592895 kernel: GPT:9289727 != 19775487
May 9 23:43:52.593663 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:43:52.594597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:43:52.600216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:43:52.600336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:43:52.606713 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:43:52.608249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:43:52.608396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:43:52.609759 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:43:52.619761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:43:52.630536 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (515)
May 9 23:43:52.630586 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
May 9 23:43:52.633506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:43:52.645487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 23:43:52.650938 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 23:43:52.655535 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 23:43:52.656826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 23:43:52.662644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 23:43:52.676651 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:43:52.678550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:43:52.696820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:43:52.723230 disk-uuid[550]: Primary Header is updated.
May 9 23:43:52.723230 disk-uuid[550]: Secondary Entries is updated.
May 9 23:43:52.723230 disk-uuid[550]: Secondary Header is updated.
May 9 23:43:52.727497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:43:53.738504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:43:53.741977 disk-uuid[559]: The operation has completed successfully.
May 9 23:43:53.763325 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:43:53.763414 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:43:53.789669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:43:53.794376 sh[571]: Success
May 9 23:43:53.805526 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:43:53.833877 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:43:53.842837 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:43:53.845284 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:43:53.853549 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa
May 9 23:43:53.853597 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:43:53.854990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:43:53.855599 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:43:53.855622 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:43:53.860149 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:43:53.861513 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:43:53.873671 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:43:53.875230 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:43:53.882951 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:43:53.882992 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:43:53.883003 kernel: BTRFS info (device vda6): using free space tree
May 9 23:43:53.886596 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:43:53.893917 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:43:53.895162 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:43:53.901551 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:43:53.907809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:43:53.979226 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:43:53.992647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:43:53.999353 ignition[664]: Ignition 2.20.0
May 9 23:43:53.999363 ignition[664]: Stage: fetch-offline
May 9 23:43:53.999400 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 9 23:43:53.999408 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:43:53.999590 ignition[664]: parsed url from cmdline: ""
May 9 23:43:53.999593 ignition[664]: no config URL provided
May 9 23:43:53.999598 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:43:53.999606 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 9 23:43:53.999631 ignition[664]: op(1): [started] loading QEMU firmware config module
May 9 23:43:53.999636 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 23:43:54.007054 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 9 23:43:54.020455 systemd-networkd[764]: lo: Link UP
May 9 23:43:54.020466 systemd-networkd[764]: lo: Gained carrier
May 9 23:43:54.021309 systemd-networkd[764]: Enumeration completed
May 9 23:43:54.021796 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:43:54.021799 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:43:54.022447 systemd-networkd[764]: eth0: Link UP
May 9 23:43:54.022451 systemd-networkd[764]: eth0: Gained carrier
May 9 23:43:54.022456 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:43:54.024190 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:43:54.025332 systemd[1]: Reached target network.target - Network.
May 9 23:43:54.048569 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 23:43:54.053075 ignition[664]: parsing config with SHA512: 6559a6d7ac9b589a41c48d66a132c006ead888444f3a7002ea61c111d140da5b04a99b9bae20ce5e10a3adf7c00b143443b858d5eb68d4bcd12c70dfa57e0b82
May 9 23:43:54.057743 unknown[664]: fetched base config from "system"
May 9 23:43:54.057753 unknown[664]: fetched user config from "qemu"
May 9 23:43:54.058134 ignition[664]: fetch-offline: fetch-offline passed
May 9 23:43:54.060773 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:43:54.058204 ignition[664]: Ignition finished successfully
May 9 23:43:54.062431 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 23:43:54.079977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:43:54.098224 ignition[770]: Ignition 2.20.0
May 9 23:43:54.098234 ignition[770]: Stage: kargs
May 9 23:43:54.098606 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 9 23:43:54.098617 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:43:54.099605 ignition[770]: kargs: kargs passed
May 9 23:43:54.099656 ignition[770]: Ignition finished successfully
May 9 23:43:54.102886 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:43:54.111643 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:43:54.122699 ignition[778]: Ignition 2.20.0
May 9 23:43:54.122710 ignition[778]: Stage: disks
May 9 23:43:54.122873 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 9 23:43:54.122882 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:43:54.123810 ignition[778]: disks: disks passed
May 9 23:43:54.125168 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:43:54.123860 ignition[778]: Ignition finished successfully
May 9 23:43:54.126372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:43:54.127612 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:43:54.129300 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:43:54.130693 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:43:54.132270 systemd[1]: Reached target basic.target - Basic System.
May 9 23:43:54.144637 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:43:54.154423 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:43:54.158017 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:43:54.171590 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:43:54.223359 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:43:54.224841 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none.
May 9 23:43:54.224618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:43:54.233588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:43:54.235377 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:43:54.236582 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:43:54.236657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:43:54.236708 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:43:54.243009 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796)
May 9 23:43:54.242858 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:43:54.246422 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:43:54.246442 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:43:54.246453 kernel: BTRFS info (device vda6): using free space tree
May 9 23:43:54.244639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:43:54.250517 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:43:54.250894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:43:54.282892 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:43:54.286762 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
May 9 23:43:54.292216 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:43:54.295609 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:43:54.363393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:43:54.378606 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:43:54.380138 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:43:54.385491 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:43:54.401196 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:43:54.404396 ignition[910]: INFO : Ignition 2.20.0
May 9 23:43:54.405118 ignition[910]: INFO : Stage: mount
May 9 23:43:54.405118 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:43:54.405118 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:43:54.407047 ignition[910]: INFO : mount: mount passed
May 9 23:43:54.407047 ignition[910]: INFO : Ignition finished successfully
May 9 23:43:54.407554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:43:54.416603 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:43:54.853317 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:43:54.867682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:43:54.872486 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
May 9 23:43:54.874534 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0
May 9 23:43:54.874554 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:43:54.874564 kernel: BTRFS info (device vda6): using free space tree
May 9 23:43:54.877492 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:43:54.878026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:43:54.909219 ignition[942]: INFO : Ignition 2.20.0
May 9 23:43:54.909219 ignition[942]: INFO : Stage: files
May 9 23:43:54.910517 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:43:54.910517 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:43:54.910517 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:43:54.913041 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:43:54.913041 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:43:54.915799 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:43:54.916776 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:43:54.916776 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:43:54.916306 unknown[942]: wrote ssh authorized keys file for user: core
May 9 23:43:54.919563 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:43:54.919563 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 9 23:43:54.977224 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:43:55.187199 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:43:55.187199 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:43:55.190316 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 9 23:43:55.244685 systemd-networkd[764]: eth0: Gained IPv6LL
May 9 23:43:55.554780 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:43:55.637505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:43:55.649642 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 9 23:43:55.893755 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:43:56.256379 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:43:56.256379 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 9 23:43:56.259937 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 9 23:43:56.288998 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:43:56.293192 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:43:56.294931 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 23:43:56.294931 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:43:56.294931 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:43:56.294931 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:43:56.294931 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:43:56.294931 ignition[942]: INFO : files: files passed
May 9 23:43:56.294931 ignition[942]: INFO : Ignition finished successfully
May 9 23:43:56.297727 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:43:56.305871 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:43:56.307555 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:43:56.308903 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:43:56.308982 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:43:56.315586 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 23:43:56.317730 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:43:56.317730 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:43:56.320146 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:43:56.319240 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:43:56.321431 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:43:56.334699 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:43:56.353784 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:43:56.353897 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:43:56.355597 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:43:56.357995 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:43:56.358862 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:43:56.359630 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:43:56.374452 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:43:56.382680 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:43:56.391426 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:43:56.392558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:43:56.394283 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:43:56.395794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:43:56.396002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:43:56.398136 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:43:56.399790 systemd[1]: Stopped target basic.target - Basic System. May 9 23:43:56.401241 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:43:56.402683 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:43:56.404427 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:43:56.406076 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:43:56.407575 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:43:56.409427 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:43:56.411038 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:43:56.412554 systemd[1]: Stopped target swap.target - Swaps. May 9 23:43:56.413813 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:43:56.413935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:43:56.415983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:43:56.417469 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:43:56.419116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:43:56.420537 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:43:56.421657 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:43:56.421782 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:43:56.424032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 23:43:56.424148 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:43:56.429913 systemd[1]: Stopped target paths.target - Path Units. May 9 23:43:56.431180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 23:43:56.434554 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 23:43:56.435618 systemd[1]: Stopped target slices.target - Slice Units. May 9 23:43:56.437237 systemd[1]: Stopped target sockets.target - Socket Units. May 9 23:43:56.443134 systemd[1]: iscsid.socket: Deactivated successfully. 
May 9 23:43:56.443229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 23:43:56.444456 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 23:43:56.444556 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 23:43:56.446457 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:43:56.446604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:43:56.448963 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:43:56.449073 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:43:56.459668 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:43:56.460443 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 23:43:56.460608 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:43:56.463182 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:43:56.464855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:43:56.464982 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:43:56.466513 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:43:56.466613 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:43:56.471605 ignition[998]: INFO : Ignition 2.20.0 May 9 23:43:56.471605 ignition[998]: INFO : Stage: umount May 9 23:43:56.473375 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:43:56.473375 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 23:43:56.473375 ignition[998]: INFO : umount: umount passed May 9 23:43:56.473375 ignition[998]: INFO : Ignition finished successfully May 9 23:43:56.472097 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:43:56.474134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:43:56.475612 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:43:56.475738 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:43:56.478213 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:43:56.479009 systemd[1]: Stopped target network.target - Network. May 9 23:43:56.480741 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:43:56.480816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:43:56.482350 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:43:56.482389 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:43:56.483816 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 23:43:56.483860 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 23:43:56.485300 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 23:43:56.485341 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 23:43:56.487103 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 23:43:56.488418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 23:43:56.490184 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:43:56.490267 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:43:56.492087 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 23:43:56.492169 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
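The umount stage reported by ignition[998] above is the last of Ignition's initramfs stages; the ignition-disks, ignition-kargs, ignition-mount, and ignition-files services being stopped around it correspond to stages that already ran. A minimal sketch of the stage order, assuming the upstream Ignition 2.x stage names:

```python
# Ignition stage order as documented upstream (an assumption here, not
# something the journal states); "umount" above is the final stage, after
# which the initramfs tears itself down and switches root.
STAGES = ["fetch-offline", "fetch", "kargs", "disks", "mount", "files", "umount"]
current = "umount"
print("completed:", STAGES[:STAGES.index(current)])
print("current:  ", current)
```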
May 9 23:43:56.497547 systemd-networkd[764]: eth0: DHCPv6 lease lost May 9 23:43:56.499122 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 23:43:56.499235 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 23:43:56.500391 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 23:43:56.500423 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 23:43:56.510633 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 23:43:56.511584 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 23:43:56.511658 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 23:43:56.513502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:43:56.515566 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 23:43:56.517535 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 23:43:56.520399 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:43:56.520601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:43:56.522317 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 23:43:56.522369 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 23:43:56.523848 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 23:43:56.523888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:43:56.526011 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 23:43:56.526182 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:43:56.531139 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 23:43:56.532505 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 23:43:56.534757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 23:43:56.534818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 23:43:56.536149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 23:43:56.536186 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:43:56.537831 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 23:43:56.537885 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 23:43:56.540161 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 23:43:56.540203 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 23:43:56.542628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:43:56.542674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:43:56.558681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 23:43:56.559742 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 23:43:56.559812 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:43:56.561523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:43:56.561567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:43:56.563728 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 9 23:43:56.565500 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 23:43:56.567331 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 23:43:56.569589 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 23:43:56.579119 systemd[1]: Switching root. May 9 23:43:56.603532 systemd-journald[239]: Journal stopped May 9 23:43:57.381746 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 9 23:43:57.381808 kernel: SELinux: policy capability network_peer_controls=1 May 9 23:43:57.381822 kernel: SELinux: policy capability open_perms=1 May 9 23:43:57.381840 kernel: SELinux: policy capability extended_socket_class=1 May 9 23:43:57.381850 kernel: SELinux: policy capability always_check_network=0 May 9 23:43:57.381863 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 23:43:57.381873 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 23:43:57.381884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 23:43:57.381895 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 23:43:57.381905 kernel: audit: type=1403 audit(1746834236.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 23:43:57.381917 systemd[1]: Successfully loaded SELinux policy in 34ms. May 9 23:43:57.381936 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.365ms. May 9 23:43:57.381949 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 23:43:57.381961 systemd[1]: Detected virtualization kvm. May 9 23:43:57.381972 systemd[1]: Detected architecture arm64. May 9 23:43:57.381982 systemd[1]: Detected first boot. May 9 23:43:57.381992 systemd[1]: Initializing machine ID from VM UUID. May 9 23:43:57.382002 zram_generator::config[1045]: No configuration found. May 9 23:43:57.382014 systemd[1]: Populated /etc with preset unit settings. May 9 23:43:57.382026 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 23:43:57.382037 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 23:43:57.382051 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 23:43:57.382062 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 23:43:57.382073 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 23:43:57.382084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 23:43:57.382096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 23:43:57.382106 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 23:43:57.382116 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 23:43:57.382128 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 23:43:57.382138 systemd[1]: Created slice user.slice - User and Session Slice. May 9 23:43:57.382149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:43:57.382159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
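The kernel audit record above embeds a Unix timestamp, audit(1746834236.771:2). Decoding it is a quick sanity check: it lands between "Switching root" (23:43:56.603) and the journal restart, i.e. the SELinux policy load happened while journald itself was down and was only written out afterwards.

```python
from datetime import datetime, timezone

# Decode the epoch timestamp carried by the audit record above.
ts = 1746834236.771
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-05-09T23:43:56.771000+00:00
```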
May 9 23:43:57.382170 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 23:43:57.382181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 23:43:57.382192 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 23:43:57.382204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 23:43:57.382215 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 9 23:43:57.382227 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:43:57.382238 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 23:43:57.382248 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 23:43:57.382260 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 23:43:57.382271 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 23:43:57.382282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:43:57.382293 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:43:57.382303 systemd[1]: Reached target slices.target - Slice Units. May 9 23:43:57.382315 systemd[1]: Reached target swap.target - Swaps. May 9 23:43:57.382326 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 23:43:57.382337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 23:43:57.382347 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 23:43:57.382358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 23:43:57.382368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 23:43:57.382378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 23:43:57.382388 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 23:43:57.382398 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 23:43:57.382410 systemd[1]: Mounting media.mount - External Media Directory... May 9 23:43:57.382420 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 23:43:57.382430 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 23:43:57.382440 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 23:43:57.382451 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 23:43:57.382462 systemd[1]: Reached target machines.target - Containers. May 9 23:43:57.382483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 23:43:57.382497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:43:57.382510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 23:43:57.382520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 23:43:57.382531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:43:57.382542 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 9 23:43:57.382553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:43:57.382564 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 23:43:57.382574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:43:57.382585 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 23:43:57.382596 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 23:43:57.382608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 23:43:57.382619 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 23:43:57.382629 systemd[1]: Stopped systemd-fsck-usr.service. May 9 23:43:57.382639 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 23:43:57.382650 kernel: fuse: init (API version 7.39) May 9 23:43:57.382659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 23:43:57.382670 kernel: loop: module loaded May 9 23:43:57.382679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 23:43:57.382715 systemd-journald[1105]: Collecting audit messages is disabled. May 9 23:43:57.382741 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 23:43:57.382753 systemd-journald[1105]: Journal started May 9 23:43:57.382776 systemd-journald[1105]: Runtime Journal (/run/log/journal/0101db97a9394eb5bc74762cfc7d289c) is 5.9M, max 47.3M, 41.4M free. May 9 23:43:57.139124 systemd[1]: Queued start job for default target multi-user.target. May 9 23:43:57.167057 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 23:43:57.167425 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 23:43:57.391075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:43:57.391124 systemd[1]: verity-setup.service: Deactivated successfully. May 9 23:43:57.391146 systemd[1]: Stopped verity-setup.service. May 9 23:43:57.396747 systemd[1]: Started systemd-journald.service - Journal Service. May 9 23:43:57.398023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 23:43:57.399014 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 23:43:57.400010 systemd[1]: Mounted media.mount - External Media Directory. May 9 23:43:57.400977 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 23:43:57.401974 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 23:43:57.402937 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 23:43:57.404548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 23:43:57.406143 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 23:43:57.406288 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 23:43:57.407926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:43:57.408066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:43:57.409675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:43:57.409839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 9 23:43:57.410520 kernel: ACPI: bus type drm_connector registered May 9 23:43:57.411897 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:43:57.412025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:43:57.413189 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 23:43:57.413341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 23:43:57.414579 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:43:57.414731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:43:57.417382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 23:43:57.425020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 23:43:57.427559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 23:43:57.429230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 23:43:57.442146 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 23:43:57.461660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 23:43:57.463946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 23:43:57.465078 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 23:43:57.465117 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:43:57.467137 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 23:43:57.469393 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 23:43:57.471582 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 23:43:57.472673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:43:57.475196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 23:43:57.477245 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 23:43:57.478565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:43:57.480652 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 23:43:57.481918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:43:57.483815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:43:57.486689 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 23:43:57.488745 systemd-journald[1105]: Time spent on flushing to /var/log/journal/0101db97a9394eb5bc74762cfc7d289c is 26.106ms for 859 entries. May 9 23:43:57.488745 systemd-journald[1105]: System Journal (/var/log/journal/0101db97a9394eb5bc74762cfc7d289c) is 8.0M, max 195.6M, 187.6M free. May 9 23:43:57.526618 systemd-journald[1105]: Received client request to flush runtime journal. May 9 23:43:57.526665 kernel: loop0: detected capacity change from 0 to 116808 May 9 23:43:57.492721 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
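journald reports both its size caps above (runtime journal 5.9M of a 47.3M max, system journal 8.0M of 195.6M) and its flush cost: 26.106 ms for 859 entries. The per-entry figure is easy to back out:

```python
# Average flush cost from the journald line above.
flush_ms, entries = 26.106, 859
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~30.4 us
```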
May 9 23:43:57.497512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:43:57.499713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 23:43:57.501046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 23:43:57.502982 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 23:43:57.504838 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 23:43:57.510357 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 23:43:57.517921 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 23:43:57.521043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 23:43:57.528892 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:43:57.530782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 23:43:57.536499 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 23:43:57.543921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 23:43:57.547062 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 23:43:57.551163 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 23:43:57.553023 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 23:43:57.560733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 23:43:57.570800 kernel: loop1: detected capacity change from 0 to 201592 May 9 23:43:57.579459 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. May 9 23:43:57.579497 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. May 9 23:43:57.584047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 23:43:57.605521 kernel: loop2: detected capacity change from 0 to 113536 May 9 23:43:57.644520 kernel: loop3: detected capacity change from 0 to 116808 May 9 23:43:57.649499 kernel: loop4: detected capacity change from 0 to 201592 May 9 23:43:57.655503 kernel: loop5: detected capacity change from 0 to 113536 May 9 23:43:57.658857 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 23:43:57.659238 (sd-merge)[1180]: Merged extensions into '/usr'. May 9 23:43:57.663045 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... May 9 23:43:57.663059 systemd[1]: Reloading... May 9 23:43:57.716500 zram_generator::config[1203]: No configuration found. May 9 23:43:57.804807 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 23:43:57.818578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:43:57.854556 systemd[1]: Reloading finished in 191 ms. May 9 23:43:57.886098 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 23:43:57.887336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 23:43:57.904704 systemd[1]: Starting ensure-sysext.service... 
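The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the loop0-loop5 capacity changes just before are those images being attached. A rough sketch of the discovery step, assuming the search path documented for systemd-sysext; this only lists candidates, it does not merge anything:

```python
from pathlib import Path

# Directories systemd-sysext scans for *.raw images or extension trees
# (list per the systemd-sysext documentation, an assumption here). The
# kubernetes.raw symlink Ignition wrote into /etc/extensions earlier is how
# a payload stored under /opt/extensions still ends up merged.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions",
               "/usr/local/lib/extensions", "/usr/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if p.is_dir():
        for entry in sorted(p.iterdir()):
            target = entry.resolve() if entry.is_symlink() else ""
            print(entry, "->", target)
```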
May 9 23:43:57.906463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 23:43:57.915226 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... May 9 23:43:57.915243 systemd[1]: Reloading... May 9 23:43:57.924078 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 23:43:57.924334 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 23:43:57.925015 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 23:43:57.925237 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 9 23:43:57.925279 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. May 9 23:43:57.930212 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:43:57.930225 systemd-tmpfiles[1241]: Skipping /boot May 9 23:43:57.938983 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. May 9 23:43:57.938998 systemd-tmpfiles[1241]: Skipping /boot May 9 23:43:57.955498 zram_generator::config[1268]: No configuration found. May 9 23:43:58.047584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:43:58.083035 systemd[1]: Reloading finished in 167 ms. May 9 23:43:58.098563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 23:43:58.106976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 23:43:58.115941 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:43:58.118075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 23:43:58.120090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 23:43:58.125779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 23:43:58.134584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:43:58.136912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 23:43:58.140087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:43:58.141167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:43:58.147281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:43:58.156751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:43:58.157933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:43:58.159763 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 23:43:58.164303 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 23:43:58.166223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:43:58.166352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:43:58.168127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 9 23:43:58.168257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:43:58.170088 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:43:58.170621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:43:58.172252 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 23:43:58.183223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:43:58.185695 systemd-udevd[1314]: Using default interface naming scheme 'v255'. May 9 23:43:58.200039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:43:58.206664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:43:58.212503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:43:58.213558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:43:58.215721 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 23:43:58.219627 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:43:58.220569 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:43:58.222264 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 23:43:58.225510 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 23:43:58.228131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:43:58.228825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:43:58.232937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:43:58.233616 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:43:58.235929 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:43:58.236058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:43:58.254525 systemd[1]: Finished ensure-sysext.service. May 9 23:43:58.256075 augenrules[1370]: No rules May 9 23:43:58.258200 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:43:58.258370 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:43:58.260990 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 23:43:58.272297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 23:43:58.275554 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1364) May 9 23:43:58.292883 systemd-resolved[1307]: Positive Trust Anchors: May 9 23:43:58.293015 systemd-resolved[1307]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 23:43:58.293048 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 23:43:58.295044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 23:43:58.298739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 23:43:58.301354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 23:43:58.303292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 23:43:58.303873 systemd-resolved[1307]: Defaulting to hostname 'linux'. May 9 23:43:58.304767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 23:43:58.306722 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:43:58.311985 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 23:43:58.313175 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 23:43:58.313585 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:43:58.315096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 23:43:58.315280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 23:43:58.317846 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 23:43:58.317993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 23:43:58.319559 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 23:43:58.319713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 23:43:58.322069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 23:43:58.322212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 23:43:58.324991 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 9 23:43:58.339045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:43:58.340315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 23:43:58.340382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 23:43:58.341471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 23:43:58.344130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 23:43:58.374101 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 23:43:58.378521 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
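systemd-resolved's positive trust anchor above is the DNSSEC root DS record for the 2017 root KSK (key tag 20326), and the negative anchors are the standard private and special-use zones that must not be validated. Splitting the DS record into its fields:

```python
# The root trust anchor logged above, reassembled across the line wrap.
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
print(f"key tag={key_tag} algorithm={algorithm} (RSASHA256) digest type={digest_type} (SHA-256)")
assert len(digest) == 64  # a SHA-256 digest in hex
```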
May 9 23:43:58.379951 systemd[1]: Reached target time-set.target - System Time Set. May 9 23:43:58.395248 systemd-networkd[1386]: lo: Link UP May 9 23:43:58.395261 systemd-networkd[1386]: lo: Gained carrier May 9 23:43:58.397809 systemd-networkd[1386]: Enumeration completed May 9 23:43:58.397931 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:43:58.398978 systemd[1]: Reached target network.target - Network. May 9 23:43:58.400467 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:43:58.400489 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:43:58.401676 systemd-networkd[1386]: eth0: Link UP May 9 23:43:58.401687 systemd-networkd[1386]: eth0: Gained carrier May 9 23:43:58.401709 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:43:58.408674 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 23:43:58.411250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:43:58.420561 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 23:43:58.421793 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. May 9 23:43:58.421845 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 23:43:58.422871 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 23:43:58.422921 systemd-timesyncd[1387]: Initial clock synchronization to Fri 2025-05-09 23:43:58.388617 UTC. May 9 23:43:58.424920 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 23:43:58.437358 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:43:58.463541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:43:58.473203 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 23:43:58.474488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 23:43:58.475287 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:43:58.476262 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:43:58.477216 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:43:58.478315 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:43:58.479236 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:43:58.480207 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:43:58.481121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:43:58.481157 systemd[1]: Reached target paths.target - Path Units. May 9 23:43:58.481810 systemd[1]: Reached target timers.target - Timer Units. May 9 23:43:58.483233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:43:58.485563 systemd[1]: Starting docker.socket - Docker Socket for the API... 
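The DHCPv4 lease above hands eth0 the address 10.0.0.36/16 with gateway 10.0.0.1, and systemd-timesyncd then reaches that same 10.0.0.1 for NTP on port 123. A one-line check that the gateway sits inside the leased network:

```python
import ipaddress

# Addresses taken from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("10.0.0.36/16")
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True
```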
May 9 23:43:58.493545 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:43:58.495768 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 23:43:58.497267 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:43:58.498248 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:43:58.499074 systemd[1]: Reached target basic.target - Basic System. May 9 23:43:58.499935 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:43:58.499969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:43:58.500976 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:43:58.502868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:43:58.504057 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 23:43:58.506730 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:43:58.511721 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:43:58.513327 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:43:58.513611 jq[1416]: false May 9 23:43:58.515899 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:43:58.517786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 23:43:58.520805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:43:58.524189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:43:58.527655 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:43:58.528097 extend-filesystems[1417]: Found loop3 May 9 23:43:58.529039 extend-filesystems[1417]: Found loop4 May 9 23:43:58.530224 extend-filesystems[1417]: Found loop5 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda May 9 23:43:58.530224 extend-filesystems[1417]: Found vda1 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda2 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda3 May 9 23:43:58.530224 extend-filesystems[1417]: Found usr May 9 23:43:58.530224 extend-filesystems[1417]: Found vda4 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda6 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda7 May 9 23:43:58.530224 extend-filesystems[1417]: Found vda9 May 9 23:43:58.530224 extend-filesystems[1417]: Checking size of /dev/vda9 May 9 23:43:58.531816 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:43:58.534538 dbus-daemon[1415]: [system] SELinux support is enabled May 9 23:43:58.561517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1359) May 9 23:43:58.532214 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:43:58.561715 extend-filesystems[1417]: Resized partition /dev/vda9 May 9 23:43:58.532863 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:43:58.540840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 9 23:43:58.542997 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:43:58.561815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 23:43:58.566308 jq[1431]: true May 9 23:43:58.568151 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) May 9 23:43:58.582621 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:43:58.582796 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:43:58.583070 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:43:58.583222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:43:58.587894 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:43:58.588081 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:43:58.599201 update_engine[1428]: I20250509 23:43:58.598506 1428 main.cc:92] Flatcar Update Engine starting May 9 23:43:58.601846 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:43:58.603754 update_engine[1428]: I20250509 23:43:58.603703 1428 update_check_scheduler.cc:74] Next update check in 10m39s May 9 23:43:58.606104 jq[1442]: true May 9 23:43:58.606502 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 23:43:58.615928 systemd[1]: Started update-engine.service - Update Engine. May 9 23:43:58.616993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:43:58.617029 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:43:58.618044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:43:58.618068 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:43:58.624648 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:43:58.662396 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:43:58.662954 systemd-logind[1424]: New seat seat0. May 9 23:43:58.663763 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:43:58.673774 tar[1441]: linux-arm64/LICENSE May 9 23:43:58.674628 tar[1441]: linux-arm64/helm May 9 23:43:58.710533 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 23:43:58.716524 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:43:58.721381 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 23:43:58.721381 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:43:58.721381 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 23:43:58.724318 extend-filesystems[1417]: Resized filesystem in /dev/vda9 May 9 23:43:58.723939 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:43:58.724144 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
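extend-filesystems grew the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 blocks at the 4k block size the resize line states; in bytes that is:

```python
# Before/after sizes from the EXT4-fs resize lines above.
BLOCK = 4096
before, after = 553472, 1864699
print(f"{before * BLOCK / 2**30:.2f} GiB -> {after * BLOCK / 2**30:.2f} GiB")
# -> 2.11 GiB -> 7.11 GiB
```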
May 9 23:43:58.734512 bash[1469]: Updated "/home/core/.ssh/authorized_keys" May 9 23:43:58.737310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:43:58.739578 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 23:43:58.851434 containerd[1443]: time="2025-05-09T23:43:58.851301560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 9 23:43:58.882195 containerd[1443]: time="2025-05-09T23:43:58.882136200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.883804 containerd[1443]: time="2025-05-09T23:43:58.883747000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:43:58.883804 containerd[1443]: time="2025-05-09T23:43:58.883783760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:43:58.883804 containerd[1443]: time="2025-05-09T23:43:58.883800440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:43:58.883976 containerd[1443]: time="2025-05-09T23:43:58.883950760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:43:58.883976 containerd[1443]: time="2025-05-09T23:43:58.883973960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.884035 containerd[1443]: time="2025-05-09T23:43:58.884026680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:43:58.884055 containerd[1443]: time="2025-05-09T23:43:58.884038520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.884215 containerd[1443]: time="2025-05-09T23:43:58.884188800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:43:58.884215 containerd[1443]: time="2025-05-09T23:43:58.884208880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.884260 containerd[1443]: time="2025-05-09T23:43:58.884223440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:43:58.884260 containerd[1443]: time="2025-05-09T23:43:58.884232640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.884314 containerd[1443]: time="2025-05-09T23:43:58.884301560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:43:58.884526 containerd[1443]: time="2025-05-09T23:43:58.884509200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 9 23:43:58.884620 containerd[1443]: time="2025-05-09T23:43:58.884604440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:43:58.884649 containerd[1443]: time="2025-05-09T23:43:58.884621840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:43:58.884716 containerd[1443]: time="2025-05-09T23:43:58.884700920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 23:43:58.884765 containerd[1443]: time="2025-05-09T23:43:58.884753200Z" level=info msg="metadata content store policy set" policy=shared May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890195520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890255920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890273000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890288960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890303400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 23:43:58.890647 containerd[1443]: time="2025-05-09T23:43:58.890462280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:43:58.892397 containerd[1443]: time="2025-05-09T23:43:58.892347040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:43:58.892615 containerd[1443]: time="2025-05-09T23:43:58.892581480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:43:58.892615 containerd[1443]: time="2025-05-09T23:43:58.892606720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:43:58.892680 containerd[1443]: time="2025-05-09T23:43:58.892621880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:43:58.892680 containerd[1443]: time="2025-05-09T23:43:58.892635760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892680 containerd[1443]: time="2025-05-09T23:43:58.892648520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892680 containerd[1443]: time="2025-05-09T23:43:58.892660440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892680 containerd[1443]: time="2025-05-09T23:43:58.892673720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 9 23:43:58.892778 containerd[1443]: time="2025-05-09T23:43:58.892686880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892778 containerd[1443]: time="2025-05-09T23:43:58.892760080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892778 containerd[1443]: time="2025-05-09T23:43:58.892776280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892832 containerd[1443]: time="2025-05-09T23:43:58.892787840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 23:43:58.892832 containerd[1443]: time="2025-05-09T23:43:58.892808120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892832 containerd[1443]: time="2025-05-09T23:43:58.892821480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892888 containerd[1443]: time="2025-05-09T23:43:58.892833240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892888 containerd[1443]: time="2025-05-09T23:43:58.892845360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892888 containerd[1443]: time="2025-05-09T23:43:58.892856520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892945 containerd[1443]: time="2025-05-09T23:43:58.892915480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892945 containerd[1443]: time="2025-05-09T23:43:58.892931040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892980 containerd[1443]: time="2025-05-09T23:43:58.892944160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892980 containerd[1443]: time="2025-05-09T23:43:58.892958000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:43:58.892980 containerd[1443]: time="2025-05-09T23:43:58.892972560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:43:58.893031 containerd[1443]: time="2025-05-09T23:43:58.892984440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:43:58.893031 containerd[1443]: time="2025-05-09T23:43:58.892996840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:43:58.893031 containerd[1443]: time="2025-05-09T23:43:58.893013480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:43:58.893031 containerd[1443]: time="2025-05-09T23:43:58.893028640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:43:58.893131 containerd[1443]: time="2025-05-09T23:43:58.893111000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 9 23:43:58.893154 containerd[1443]: time="2025-05-09T23:43:58.893135520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:43:58.893154 containerd[1443]: time="2025-05-09T23:43:58.893146880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:43:58.894254 containerd[1443]: time="2025-05-09T23:43:58.894215120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 23:43:58.894398 containerd[1443]: time="2025-05-09T23:43:58.894367760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:43:58.894398 containerd[1443]: time="2025-05-09T23:43:58.894391880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:43:58.894445 containerd[1443]: time="2025-05-09T23:43:58.894406280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:43:58.894445 containerd[1443]: time="2025-05-09T23:43:58.894416160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:43:58.894445 containerd[1443]: time="2025-05-09T23:43:58.894429480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:43:58.894445 containerd[1443]: time="2025-05-09T23:43:58.894439360Z" level=info msg="NRI interface is disabled by configuration." May 9 23:43:58.894533 containerd[1443]: time="2025-05-09T23:43:58.894449400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 23:43:58.894854 containerd[1443]: time="2025-05-09T23:43:58.894796040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:43:58.894854 containerd[1443]: time="2025-05-09T23:43:58.894848480Z" level=info msg="Connect containerd service" May 9 23:43:58.894977 containerd[1443]: time="2025-05-09T23:43:58.894886400Z" level=info msg="using legacy CRI server" May 9 23:43:58.894977 containerd[1443]: time="2025-05-09T23:43:58.894893960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:43:58.895149 containerd[1443]: time="2025-05-09T23:43:58.895120680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:43:58.897895 containerd[1443]: time="2025-05-09T23:43:58.897858200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:43:58.898272 
containerd[1443]: time="2025-05-09T23:43:58.898113960Z" level=info msg="Start subscribing containerd event" May 9 23:43:58.898272 containerd[1443]: time="2025-05-09T23:43:58.898170120Z" level=info msg="Start recovering state" May 9 23:43:58.898272 containerd[1443]: time="2025-05-09T23:43:58.898233760Z" level=info msg="Start event monitor" May 9 23:43:58.898363 containerd[1443]: time="2025-05-09T23:43:58.898253240Z" level=info msg="Start snapshots syncer" May 9 23:43:58.898424 containerd[1443]: time="2025-05-09T23:43:58.898409760Z" level=info msg="Start cni network conf syncer for default" May 9 23:43:58.898523 containerd[1443]: time="2025-05-09T23:43:58.898508360Z" level=info msg="Start streaming server" May 9 23:43:58.898991 containerd[1443]: time="2025-05-09T23:43:58.898848120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:43:58.899036 containerd[1443]: time="2025-05-09T23:43:58.899020400Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:43:58.899277 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:43:58.901503 containerd[1443]: time="2025-05-09T23:43:58.900259640Z" level=info msg="containerd successfully booted in 0.050239s" May 9 23:43:59.056959 tar[1441]: linux-arm64/README.md May 9 23:43:59.069258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 23:43:59.364970 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:43:59.384454 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:43:59.400863 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:43:59.406725 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:43:59.406955 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:43:59.410912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:43:59.426569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:43:59.429258 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:43:59.431335 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 9 23:43:59.432526 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:43:59.724630 systemd-networkd[1386]: eth0: Gained IPv6LL May 9 23:43:59.727154 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:43:59.728882 systemd[1]: Reached target network-online.target - Network is Online. May 9 23:43:59.739203 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 23:43:59.741637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:43:59.743721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:43:59.771883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:43:59.773350 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 23:43:59.774320 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 23:43:59.777919 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:44:00.293504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:00.295116 systemd[1]: Reached target multi-user.target - Multi-User System. 
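The "Start cri plugin with config {...}" record above is containerd's CRI plugin printing its effective configuration as a Go struct dump. A few operationally relevant fields (snapshotter, default runtime, cgroup wiring, sandbox image) can be pulled out of that text with plain pattern matching. Below is a minimal Python sketch run against an abbreviated excerpt of the exact dump logged above; the field() helper is illustrative, not part of any containerd tooling:

    import re

    # Abbreviated excerpt of the config dump logged by the CRI plugin above.
    CONFIG_DUMP = ("Start cri plugin with config {PluginConfig:{ContainerdConfig:"
                   "{Snapshotter:overlayfs DefaultRuntimeName:runc "
                   "Options:map[SystemdCgroup:true]} "
                   "SandboxImage:registry.k8s.io/pause:3.8}")

    def field(name, text):
        """Return the value following 'Name:' in the struct dump, or None."""
        m = re.search(rf"{name}:(\S+)", text)
        return m.group(1).rstrip("]}") if m else None

    for key in ("Snapshotter", "DefaultRuntimeName", "SystemdCgroup", "SandboxImage"):
        print(key, "=", field(key, CONFIG_DUMP))
    # Snapshotter = overlayfs, DefaultRuntimeName = runc,
    # SystemdCgroup = true, SandboxImage = registry.k8s.io/pause:3.8

Note Options:map[SystemdCgroup:true]: runc is driven through the systemd cgroup driver, which matches the "CgroupDriver":"systemd" setting the kubelet reports in its Node Config later in this log.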
May 9 23:44:00.297604 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:44:00.299567 systemd[1]: Startup finished in 550ms (kernel) + 5.066s (initrd) + 3.565s (userspace) = 9.182s. May 9 23:44:00.706620 kubelet[1529]: E0509 23:44:00.706454 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:44:00.708849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:44:00.709006 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:44:04.472506 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:44:04.474157 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:60064.service - OpenSSH per-connection server daemon (10.0.0.1:60064). May 9 23:44:04.547011 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 60064 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:04.548824 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:04.559455 systemd-logind[1424]: New session 1 of user core. May 9 23:44:04.560563 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:44:04.574791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:44:04.584515 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:44:04.586933 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:44:04.594292 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:44:04.671411 systemd[1546]: Queued start job for default target default.target. May 9 23:44:04.680430 systemd[1546]: Created slice app.slice - User Application Slice. May 9 23:44:04.680461 systemd[1546]: Reached target paths.target - Paths. May 9 23:44:04.680497 systemd[1546]: Reached target timers.target - Timers. May 9 23:44:04.681747 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:44:04.694050 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:44:04.694170 systemd[1546]: Reached target sockets.target - Sockets. May 9 23:44:04.694183 systemd[1546]: Reached target basic.target - Basic System. May 9 23:44:04.694218 systemd[1546]: Reached target default.target - Main User Target. May 9 23:44:04.694245 systemd[1546]: Startup finished in 94ms. May 9 23:44:04.694603 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:44:04.696626 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:44:04.756286 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:60076.service - OpenSSH per-connection server daemon (10.0.0.1:60076). May 9 23:44:04.799838 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:04.801081 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:04.805218 systemd-logind[1424]: New session 2 of user core. May 9 23:44:04.817660 systemd[1]: Started session-2.scope - Session 2 of User core. 
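The "Startup finished" record above breaks the 9.182s boot into 550ms (kernel) + 5.066s (initrd) + 3.565s (userspace); the phases sum to 9.181s, with the last millisecond lost to rounding of the printed values. A minimal Python sketch for extracting those phase timings from such a record (the regex is written for exactly this systemd message format):

    import re

    LINE = ("Startup finished in 550ms (kernel) + 5.066s (initrd) "
            "+ 3.565s (userspace) = 9.182s.")

    def to_seconds(value):
        """Convert systemd's '550ms' / '5.066s' notation to seconds."""
        return float(value[:-2]) / 1000 if value.endswith("ms") else float(value[:-1])

    # Capture each 'duration (phase)' pair, e.g. '5.066s (initrd)'.
    phases = {name: to_seconds(dur)
              for dur, name in re.findall(r"([\d.]+m?s) \((\w+)\)", LINE)}

    print(phases)                          # {'kernel': 0.55, 'initrd': 5.066, 'userspace': 3.565}
    print(round(sum(phases.values()), 3))  # 9.181, matching the logged total to within rounding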
May 9 23:44:04.869240 sshd[1559]: Connection closed by 10.0.0.1 port 60076 May 9 23:44:04.869910 sshd-session[1557]: pam_unix(sshd:session): session closed for user core May 9 23:44:04.885104 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:60076.service: Deactivated successfully. May 9 23:44:04.887740 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:44:04.889038 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 9 23:44:04.890254 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:60084.service - OpenSSH per-connection server daemon (10.0.0.1:60084). May 9 23:44:04.891559 systemd-logind[1424]: Removed session 2. May 9 23:44:04.935275 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 60084 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:04.936614 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:04.941579 systemd-logind[1424]: New session 3 of user core. May 9 23:44:04.950694 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:44:04.998402 sshd[1566]: Connection closed by 10.0.0.1 port 60084 May 9 23:44:04.998870 sshd-session[1564]: pam_unix(sshd:session): session closed for user core May 9 23:44:05.007914 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:60084.service: Deactivated successfully. May 9 23:44:05.010239 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:44:05.011775 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 9 23:44:05.013532 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:60092.service - OpenSSH per-connection server daemon (10.0.0.1:60092). May 9 23:44:05.014598 systemd-logind[1424]: Removed session 3. May 9 23:44:05.056816 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 60092 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:05.058131 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:05.063404 systemd-logind[1424]: New session 4 of user core. May 9 23:44:05.070700 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:44:05.122938 sshd[1573]: Connection closed by 10.0.0.1 port 60092 May 9 23:44:05.123441 sshd-session[1571]: pam_unix(sshd:session): session closed for user core May 9 23:44:05.137053 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:60092.service: Deactivated successfully. May 9 23:44:05.138587 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:44:05.140437 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. May 9 23:44:05.141164 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:60100.service - OpenSSH per-connection server daemon (10.0.0.1:60100). May 9 23:44:05.143975 systemd-logind[1424]: Removed session 4. May 9 23:44:05.185371 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 60100 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:05.186629 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:05.191184 systemd-logind[1424]: New session 5 of user core. May 9 23:44:05.205712 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 23:44:05.264953 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:44:05.265273 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:05.283614 sudo[1581]: pam_unix(sudo:session): session closed for user root May 9 23:44:05.288218 sshd[1580]: Connection closed by 10.0.0.1 port 60100 May 9 23:44:05.288047 sshd-session[1578]: pam_unix(sshd:session): session closed for user core May 9 23:44:05.307149 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:60100.service: Deactivated successfully. May 9 23:44:05.309916 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:44:05.311282 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 9 23:44:05.312662 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:60102.service - OpenSSH per-connection server daemon (10.0.0.1:60102). May 9 23:44:05.313363 systemd-logind[1424]: Removed session 5. May 9 23:44:05.357092 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 60102 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:05.358601 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:05.363986 systemd-logind[1424]: New session 6 of user core. May 9 23:44:05.369704 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:44:05.421395 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:44:05.421696 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:05.425204 sudo[1590]: pam_unix(sudo:session): session closed for user root May 9 23:44:05.430297 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 23:44:05.430681 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:05.450830 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 23:44:05.475618 augenrules[1612]: No rules May 9 23:44:05.476934 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:44:05.477158 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 23:44:05.478626 sudo[1589]: pam_unix(sudo:session): session closed for user root May 9 23:44:05.480101 sshd[1588]: Connection closed by 10.0.0.1 port 60102 May 9 23:44:05.480678 sshd-session[1586]: pam_unix(sshd:session): session closed for user core May 9 23:44:05.491088 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:60102.service: Deactivated successfully. May 9 23:44:05.493759 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:44:05.495332 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 9 23:44:05.504767 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:60110.service - OpenSSH per-connection server daemon (10.0.0.1:60110). May 9 23:44:05.505892 systemd-logind[1424]: Removed session 6. May 9 23:44:05.552357 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 60110 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:05.553986 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:05.558111 systemd-logind[1424]: New session 7 of user core. May 9 23:44:05.566651 systemd[1]: Started session-7.scope - Session 7 of User core. 
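The SSH traffic above is highly regular: every session opens with an "Accepted publickey for core from 10.0.0.1 port N ssh2: RSA SHA256:..." record, and sessions 5 through 7 add sudo records on top. A short Python sketch that tallies such acceptance records by user, source address, and key fingerprint; the regex targets exactly the sshd format shown here, and tally() is an illustrative helper, not an existing tool:

    import re
    from collections import Counter

    ACCEPT = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>\S+)"
    )

    def tally(lines):
        """Count accepted-publickey logins per (user, source address, fingerprint)."""
        counts = Counter()
        for line in lines:
            m = ACCEPT.search(line)
            if m:
                counts[(m["user"], m["addr"], m["fp"])] += 1
        return counts

    sample = ["sshd[1542]: Accepted publickey for core from 10.0.0.1 "
              "port 60064 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c"]
    print(tally(sample))  # one login for core from 10.0.0.1 with that key

Run over this log it would report seven logins, all from 10.0.0.1 with the same SHA256:Q9AE... key.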
May 9 23:44:05.619049 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:44:05.619736 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:44:05.969751 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 23:44:05.969931 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 23:44:06.237945 dockerd[1644]: time="2025-05-09T23:44:06.237813501Z" level=info msg="Starting up" May 9 23:44:06.401299 dockerd[1644]: time="2025-05-09T23:44:06.401239202Z" level=info msg="Loading containers: start." May 9 23:44:06.595785 kernel: Initializing XFRM netlink socket May 9 23:44:06.699845 systemd-networkd[1386]: docker0: Link UP May 9 23:44:06.734879 dockerd[1644]: time="2025-05-09T23:44:06.734814733Z" level=info msg="Loading containers: done." May 9 23:44:06.745365 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3796011514-merged.mount: Deactivated successfully. May 9 23:44:06.749954 dockerd[1644]: time="2025-05-09T23:44:06.749567671Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 23:44:06.749954 dockerd[1644]: time="2025-05-09T23:44:06.749667665Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 9 23:44:06.749954 dockerd[1644]: time="2025-05-09T23:44:06.749771416Z" level=info msg="Daemon has completed initialization" May 9 23:44:06.780749 dockerd[1644]: time="2025-05-09T23:44:06.780687127Z" level=info msg="API listen on /run/docker.sock" May 9 23:44:06.780910 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 23:44:07.495851 containerd[1443]: time="2025-05-09T23:44:07.495798928Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 23:44:08.193490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738591559.mount: Deactivated successfully. 
May 9 23:44:09.596732 containerd[1443]: time="2025-05-09T23:44:09.596676282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:09.597193 containerd[1443]: time="2025-05-09T23:44:09.597153351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 9 23:44:09.597995 containerd[1443]: time="2025-05-09T23:44:09.597960364Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:09.601519 containerd[1443]: time="2025-05-09T23:44:09.601469235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:09.603545 containerd[1443]: time="2025-05-09T23:44:09.603298532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.10745516s" May 9 23:44:09.603545 containerd[1443]: time="2025-05-09T23:44:09.603346894Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 9 23:44:09.604017 containerd[1443]: time="2025-05-09T23:44:09.603991992Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 23:44:10.959271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 23:44:10.965228 containerd[1443]: time="2025-05-09T23:44:10.964868055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:10.972071 containerd[1443]: time="2025-05-09T23:44:10.965414843Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 9 23:44:10.972071 containerd[1443]: time="2025-05-09T23:44:10.966490993Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:10.972071 containerd[1443]: time="2025-05-09T23:44:10.970847830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:10.972088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
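Each pull above logs both a byte count ("bytes read=26233120") and a duration ("in 2.10745516s"), which together give a rough registry throughput figure. A sketch of the arithmetic using the kube-apiserver numbers from the records above; pull_rate() is an illustrative name:

    def pull_rate(size_bytes, seconds):
        """Mean pull throughput in MiB/s."""
        return size_bytes / seconds / (1024 * 1024)

    # registry.k8s.io/kube-apiserver:v1.32.4, from the records above:
    # image size 26229918 bytes, pulled in 2.10745516s.
    print(f"{pull_rate(26229918, 2.10745516):.1f} MiB/s")  # ~11.9 MiB/s

The larger images land near the same figure, while the tiny pause:3.10 image later in the log (267933 bytes in about 425ms, roughly 0.6 MiB/s) shows how fixed per-request overhead dominates small pulls.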
May 9 23:44:10.973057 containerd[1443]: time="2025-05-09T23:44:10.972261165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.368231722s" May 9 23:44:10.973057 containerd[1443]: time="2025-05-09T23:44:10.972304852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 9 23:44:10.973193 containerd[1443]: time="2025-05-09T23:44:10.973114043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 23:44:11.078122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:11.082606 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:44:11.128823 kubelet[1906]: E0509 23:44:11.128757 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:44:11.132221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:44:11.132365 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:44:12.329357 containerd[1443]: time="2025-05-09T23:44:12.329297060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:12.329918 containerd[1443]: time="2025-05-09T23:44:12.329870214Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 9 23:44:12.330660 containerd[1443]: time="2025-05-09T23:44:12.330619245Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:12.333772 containerd[1443]: time="2025-05-09T23:44:12.333734042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:12.335295 containerd[1443]: time="2025-05-09T23:44:12.335255966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.362109548s" May 9 23:44:12.335334 containerd[1443]: time="2025-05-09T23:44:12.335294179Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 9 23:44:12.335888 containerd[1443]: time="2025-05-09T23:44:12.335806777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 23:44:13.329162 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2995246216.mount: Deactivated successfully. May 9 23:44:13.545853 containerd[1443]: time="2025-05-09T23:44:13.545806408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:13.546672 containerd[1443]: time="2025-05-09T23:44:13.546287839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 9 23:44:13.547346 containerd[1443]: time="2025-05-09T23:44:13.547103720Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:13.549681 containerd[1443]: time="2025-05-09T23:44:13.549644859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:13.550344 containerd[1443]: time="2025-05-09T23:44:13.550120733Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.214139559s" May 9 23:44:13.550344 containerd[1443]: time="2025-05-09T23:44:13.550147435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 9 23:44:13.550669 containerd[1443]: time="2025-05-09T23:44:13.550577820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 23:44:14.148005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289176294.mount: Deactivated successfully. 
May 9 23:44:15.176207 containerd[1443]: time="2025-05-09T23:44:15.176148287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.176678 containerd[1443]: time="2025-05-09T23:44:15.176616666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 9 23:44:15.177559 containerd[1443]: time="2025-05-09T23:44:15.177526761Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.181967 containerd[1443]: time="2025-05-09T23:44:15.181926333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.183254 containerd[1443]: time="2025-05-09T23:44:15.183204831Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.632592754s" May 9 23:44:15.183254 containerd[1443]: time="2025-05-09T23:44:15.183242527Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 9 23:44:15.183833 containerd[1443]: time="2025-05-09T23:44:15.183704510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 23:44:15.599914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576084301.mount: Deactivated successfully. 
May 9 23:44:15.603919 containerd[1443]: time="2025-05-09T23:44:15.603880312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.604928 containerd[1443]: time="2025-05-09T23:44:15.604884587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 9 23:44:15.605562 containerd[1443]: time="2025-05-09T23:44:15.605535249Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.608509 containerd[1443]: time="2025-05-09T23:44:15.608378581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:15.609197 containerd[1443]: time="2025-05-09T23:44:15.608994225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.259095ms" May 9 23:44:15.609197 containerd[1443]: time="2025-05-09T23:44:15.609023646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 23:44:15.609703 containerd[1443]: time="2025-05-09T23:44:15.609504217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 23:44:16.188677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829527142.mount: Deactivated successfully. May 9 23:44:18.840626 containerd[1443]: time="2025-05-09T23:44:18.840572848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:18.841115 containerd[1443]: time="2025-05-09T23:44:18.841060643Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 9 23:44:18.844498 containerd[1443]: time="2025-05-09T23:44:18.841835750Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:18.845766 containerd[1443]: time="2025-05-09T23:44:18.845736791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:18.847322 containerd[1443]: time="2025-05-09T23:44:18.847280808Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.237743852s" May 9 23:44:18.847322 containerd[1443]: time="2025-05-09T23:44:18.847317987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 9 23:44:21.382785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 9 23:44:21.396113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:21.543026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:21.546647 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:44:21.577650 kubelet[2067]: E0509 23:44:21.577595 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:44:21.580265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:44:21.580536 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:44:24.139609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:24.153762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:24.176676 systemd[1]: Reloading requested from client PID 2083 ('systemctl') (unit session-7.scope)... May 9 23:44:24.176690 systemd[1]: Reloading... May 9 23:44:24.231516 zram_generator::config[2122]: No configuration found. May 9 23:44:24.470942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:44:24.525994 systemd[1]: Reloading finished in 349 ms. May 9 23:44:24.571081 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:44:24.571149 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:44:24.571392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:24.573742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:44:24.678815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:44:24.684663 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:44:24.737171 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:44:24.737171 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:44:24.737171 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
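At this point kubelet.service has failed three times for the same reason logged at run.go:72: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written during cluster bootstrap (for example by kubeadm), and the run that finally sticks, kubelet[2168] below, comes only after session 7's install script triggers the systemctl reload and restart seen above; the log shows the reload from session-7.scope but not the file write itself, so that causal link is an inference. The failure/restart spacing is also worth a look: systemd reschedules the unit about 10.25s after each failure, consistent with a roughly 10-second restart delay in the unit file (which this log does not show). A small Python sketch of the delta arithmetic from the timestamps above:

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    # (event, timestamp) pairs copied from the systemd records above.
    events = [
        ("failed",    "23:44:00.709006"),  # kubelet.service: Failed with result 'exit-code'
        ("restarted", "23:44:10.959271"),  # Scheduled restart job, restart counter is at 1
        ("failed",    "23:44:11.132365"),  # kubelet.service: Failed with result 'exit-code'
        ("restarted", "23:44:21.382785"),  # Scheduled restart job, restart counter is at 2
    ]

    times = [datetime.strptime(ts, FMT) for _, ts in events]
    for (what, _), prev, cur in zip(events[1:], times, times[1:]):
        print(f"{what} after {(cur - prev).total_seconds():.2f}s")
    # restarted after 10.25s, failed after 0.17s, restarted after 10.25s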
May 9 23:44:24.737547 kubelet[2168]: I0509 23:44:24.737178 2168 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:44:26.674144 kubelet[2168]: I0509 23:44:26.674094 2168 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:44:26.674144 kubelet[2168]: I0509 23:44:26.674131 2168 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:44:26.674553 kubelet[2168]: I0509 23:44:26.674421 2168 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:44:26.733222 kubelet[2168]: E0509 23:44:26.733154 2168 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 9 23:44:26.735054 kubelet[2168]: I0509 23:44:26.734925 2168 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:44:26.743909 kubelet[2168]: E0509 23:44:26.743867 2168 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:44:26.743909 kubelet[2168]: I0509 23:44:26.743904 2168 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:44:26.748130 kubelet[2168]: I0509 23:44:26.748098 2168 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:44:26.748401 kubelet[2168]: I0509 23:44:26.748355 2168 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:44:26.748627 kubelet[2168]: I0509 23:44:26.748391 2168 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:44:26.748716 kubelet[2168]: I0509 23:44:26.748697 2168 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:44:26.748716 kubelet[2168]: I0509 23:44:26.748706 2168 container_manager_linux.go:304] "Creating device plugin manager" May 9 23:44:26.748938 kubelet[2168]: I0509 23:44:26.748912 2168 state_mem.go:36] "Initialized new in-memory state store" May 9 23:44:26.751624 kubelet[2168]: I0509 23:44:26.751574 2168 kubelet.go:446] "Attempting to sync node with API server" May 9 23:44:26.751624 kubelet[2168]: I0509 23:44:26.751613 2168 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:44:26.751696 kubelet[2168]: I0509 23:44:26.751652 2168 kubelet.go:352] "Adding apiserver pod source" May 9 23:44:26.751696 kubelet[2168]: I0509 23:44:26.751663 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:44:26.761550 kubelet[2168]: W0509 23:44:26.760517 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 9 23:44:26.761550 kubelet[2168]: E0509 23:44:26.760596 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 9 23:44:26.761550 kubelet[2168]: W0509 23:44:26.760664 2168 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 9 23:44:26.761550 kubelet[2168]: I0509 23:44:26.760698 2168 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 9 23:44:26.761550 kubelet[2168]: E0509 23:44:26.760713 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 9 23:44:26.761550 kubelet[2168]: I0509 23:44:26.761354 2168 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:44:26.761550 kubelet[2168]: W0509 23:44:26.761510 2168 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:44:26.762627 kubelet[2168]: I0509 23:44:26.762537 2168 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:44:26.762627 kubelet[2168]: I0509 23:44:26.762594 2168 server.go:1287] "Started kubelet" May 9 23:44:26.763102 kubelet[2168]: I0509 23:44:26.763056 2168 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:44:26.764191 kubelet[2168]: I0509 23:44:26.764165 2168 server.go:490] "Adding debug handlers to kubelet server" May 9 23:44:26.766498 kubelet[2168]: I0509 23:44:26.766059 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:44:26.766738 kubelet[2168]: I0509 23:44:26.766706 2168 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:44:26.767425 kubelet[2168]: I0509 23:44:26.767389 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:44:26.767825 kubelet[2168]: I0509 23:44:26.767794 2168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:44:26.767938 kubelet[2168]: E0509 23:44:26.767918 2168 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:44:26.768071 kubelet[2168]: I0509 23:44:26.768057 2168 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:44:26.768385 kubelet[2168]: I0509 23:44:26.768361 2168 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:44:26.768534 kubelet[2168]: I0509 23:44:26.768520 2168 reconciler.go:26] "Reconciler: start to sync state" May 9 23:44:26.769135 kubelet[2168]: W0509 23:44:26.769085 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 9 23:44:26.769255 kubelet[2168]: E0509 23:44:26.769234 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" 
logger="UnhandledError" May 9 23:44:26.769789 kubelet[2168]: I0509 23:44:26.769762 2168 factory.go:221] Registration of the systemd container factory successfully May 9 23:44:26.769982 kubelet[2168]: I0509 23:44:26.769961 2168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:44:26.770795 kubelet[2168]: E0509 23:44:26.770752 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" May 9 23:44:26.771028 kubelet[2168]: E0509 23:44:26.770992 2168 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:44:26.771512 kubelet[2168]: I0509 23:44:26.771455 2168 factory.go:221] Registration of the containerd container factory successfully May 9 23:44:26.772747 kubelet[2168]: E0509 23:44:26.772495 2168 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e0082afdcc9d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 23:44:26.762562005 +0000 UTC m=+2.074351230,LastTimestamp:2025-05-09 23:44:26.762562005 +0000 UTC m=+2.074351230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 23:44:26.783948 kubelet[2168]: I0509 23:44:26.783919 2168 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:44:26.784075 kubelet[2168]: I0509 23:44:26.784062 2168 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:44:26.784164 kubelet[2168]: I0509 23:44:26.784153 2168 state_mem.go:36] "Initialized new in-memory state store" May 9 23:44:26.787504 kubelet[2168]: I0509 23:44:26.787426 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:44:26.788825 kubelet[2168]: I0509 23:44:26.788781 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:44:26.788825 kubelet[2168]: I0509 23:44:26.788808 2168 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:44:26.788825 kubelet[2168]: I0509 23:44:26.788829 2168 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 23:44:26.788940 kubelet[2168]: I0509 23:44:26.788838 2168 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:44:26.788940 kubelet[2168]: E0509 23:44:26.788885 2168 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:44:26.790871 kubelet[2168]: W0509 23:44:26.790821 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused May 9 23:44:26.791047 kubelet[2168]: E0509 23:44:26.790942 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" May 9 23:44:26.864883 kubelet[2168]: I0509 23:44:26.864838 2168 policy_none.go:49] "None policy: Start" May 9 23:44:26.864883 kubelet[2168]: I0509 23:44:26.864871 2168 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:44:26.864883 kubelet[2168]: I0509 23:44:26.864885 2168 state_mem.go:35] "Initializing new in-memory state store" May 9 23:44:26.868838 kubelet[2168]: E0509 23:44:26.868767 2168 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 23:44:26.876118 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:44:26.888356 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:44:26.889009 kubelet[2168]: E0509 23:44:26.888962 2168 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 23:44:26.891589 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:44:26.900564 kubelet[2168]: I0509 23:44:26.900411 2168 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:44:26.900689 kubelet[2168]: I0509 23:44:26.900670 2168 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:44:26.900718 kubelet[2168]: I0509 23:44:26.900683 2168 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:44:26.901206 kubelet[2168]: I0509 23:44:26.901184 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:44:26.901793 kubelet[2168]: E0509 23:44:26.901753 2168 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 23:44:26.901892 kubelet[2168]: E0509 23:44:26.901871 2168 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 23:44:26.971622 kubelet[2168]: E0509 23:44:26.971512 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" May 9 23:44:27.002973 kubelet[2168]: I0509 23:44:27.002928 2168 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 23:44:27.003376 kubelet[2168]: E0509 23:44:27.003338 2168 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" May 9 23:44:27.098046 systemd[1]: Created slice kubepods-burstable-podec85c956156220fecdf20c75664f4cb6.slice - libcontainer container kubepods-burstable-podec85c956156220fecdf20c75664f4cb6.slice. May 9 23:44:27.112713 kubelet[2168]: E0509 23:44:27.112391 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 23:44:27.114097 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 9 23:44:27.125216 kubelet[2168]: E0509 23:44:27.125006 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 23:44:27.127396 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 9 23:44:27.129082 kubelet[2168]: E0509 23:44:27.129002 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:27.170387 kubelet[2168]: I0509 23:44:27.170347 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:27.170387 kubelet[2168]: I0509 23:44:27.170390 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:27.170640 kubelet[2168]: I0509 23:44:27.170411 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:27.170640 kubelet[2168]: I0509 23:44:27.170429 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:27.170640 kubelet[2168]: I0509 23:44:27.170455 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:27.170640 kubelet[2168]: I0509 23:44:27.170498 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:27.170640 kubelet[2168]: I0509 23:44:27.170515 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:27.170746 kubelet[2168]: I0509 23:44:27.170532 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 9 23:44:27.170746 kubelet[2168]: I0509 23:44:27.170547 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:27.205420 kubelet[2168]: I0509 23:44:27.205395 2168 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:44:27.205794 kubelet[2168]: E0509 23:44:27.205760 2168 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
May 9 23:44:27.372510 kubelet[2168]: E0509 23:44:27.372431 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms"
May 9 23:44:27.413717 kubelet[2168]: E0509 23:44:27.413670 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:27.414535 containerd[1443]: time="2025-05-09T23:44:27.414500349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec85c956156220fecdf20c75664f4cb6,Namespace:kube-system,Attempt:0,}"
May 9 23:44:27.425710 kubelet[2168]: E0509 23:44:27.425673 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:27.426173 containerd[1443]: time="2025-05-09T23:44:27.426128284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 9 23:44:27.429421 kubelet[2168]: E0509 23:44:27.429355 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:27.429976 containerd[1443]: time="2025-05-09T23:44:27.429933053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 9 23:44:27.607651 kubelet[2168]: I0509 23:44:27.607613 2168 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:44:27.608315 kubelet[2168]: E0509 23:44:27.608284 2168 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
May 9 23:44:27.764073 kubelet[2168]: W0509 23:44:27.763872 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused
May 9 23:44:27.764073 kubelet[2168]: E0509 23:44:27.763936 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError"
May 9 23:44:27.773653 kubelet[2168]: W0509 23:44:27.773588 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused
May 9 23:44:27.773653 kubelet[2168]: E0509 23:44:27.773655 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError"
May 9 23:44:27.818398 kubelet[2168]: W0509 23:44:27.818358 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused
May 9 23:44:27.818398 kubelet[2168]: E0509 23:44:27.818401 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError"
May 9 23:44:27.920685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111377721.mount: Deactivated successfully.
May 9 23:44:27.927696 containerd[1443]: time="2025-05-09T23:44:27.927641377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:44:27.929021 containerd[1443]: time="2025-05-09T23:44:27.928992303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:44:27.930769 containerd[1443]: time="2025-05-09T23:44:27.930725503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 9 23:44:27.931272 containerd[1443]: time="2025-05-09T23:44:27.931239797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 23:44:27.932777 containerd[1443]: time="2025-05-09T23:44:27.932738379Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:44:27.934518 containerd[1443]: time="2025-05-09T23:44:27.934060358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 23:44:27.935905 containerd[1443]: time="2025-05-09T23:44:27.935833300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:44:27.938615 containerd[1443]: time="2025-05-09T23:44:27.938564701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.553882ms"
May 9 23:44:27.939022 containerd[1443]: time="2025-05-09T23:44:27.938957128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:44:27.940017 containerd[1443]: time="2025-05-09T23:44:27.939985477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 525.405563ms"
May 9 23:44:27.943528 containerd[1443]: time="2025-05-09T23:44:27.943454314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.236149ms"
May 9 23:44:28.133500 containerd[1443]: time="2025-05-09T23:44:28.132644665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:28.133500 containerd[1443]: time="2025-05-09T23:44:28.132726750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:28.133500 containerd[1443]: time="2025-05-09T23:44:28.132742743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.133500 containerd[1443]: time="2025-05-09T23:44:28.132823989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.134458 containerd[1443]: time="2025-05-09T23:44:28.134199843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:28.134458 containerd[1443]: time="2025-05-09T23:44:28.134290125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:28.134458 containerd[1443]: time="2025-05-09T23:44:28.134306198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.134458 containerd[1443]: time="2025-05-09T23:44:28.134381566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.135220 containerd[1443]: time="2025-05-09T23:44:28.135152518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:28.135281 containerd[1443]: time="2025-05-09T23:44:28.135248597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:28.135307 containerd[1443]: time="2025-05-09T23:44:28.135281184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.137723 containerd[1443]: time="2025-05-09T23:44:28.137616750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:28.161707 systemd[1]: Started cri-containerd-023b8afdbab93c44f16b6a5f108bc145051d6e1754861a9aae1c979ffb885a6f.scope - libcontainer container 023b8afdbab93c44f16b6a5f108bc145051d6e1754861a9aae1c979ffb885a6f.
May 9 23:44:28.163055 systemd[1]: Started cri-containerd-37f64bb01aa8554814698918d9556a0d1d74fffd1f330ef9887c9dceabe6984c.scope - libcontainer container 37f64bb01aa8554814698918d9556a0d1d74fffd1f330ef9887c9dceabe6984c.
May 9 23:44:28.164391 systemd[1]: Started cri-containerd-f1e3e8bdc112816e02f71ec668ee3b7ceef0e8b67c2148a93889b98b367be114.scope - libcontainer container f1e3e8bdc112816e02f71ec668ee3b7ceef0e8b67c2148a93889b98b367be114.
May 9 23:44:28.173421 kubelet[2168]: E0509 23:44:28.173345 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s"
May 9 23:44:28.203024 containerd[1443]: time="2025-05-09T23:44:28.202865918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec85c956156220fecdf20c75664f4cb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"37f64bb01aa8554814698918d9556a0d1d74fffd1f330ef9887c9dceabe6984c\""
May 9 23:44:28.203125 containerd[1443]: time="2025-05-09T23:44:28.203090863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1e3e8bdc112816e02f71ec668ee3b7ceef0e8b67c2148a93889b98b367be114\""
May 9 23:44:28.203807 containerd[1443]: time="2025-05-09T23:44:28.203093662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"023b8afdbab93c44f16b6a5f108bc145051d6e1754861a9aae1c979ffb885a6f\""
May 9 23:44:28.203951 kubelet[2168]: E0509 23:44:28.203924 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:28.204012 kubelet[2168]: E0509 23:44:28.203985 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:28.204120 kubelet[2168]: E0509 23:44:28.204099 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:28.206682 containerd[1443]: time="2025-05-09T23:44:28.206646590Z" level=info msg="CreateContainer within sandbox \"f1e3e8bdc112816e02f71ec668ee3b7ceef0e8b67c2148a93889b98b367be114\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 9 23:44:28.207274 containerd[1443]: time="2025-05-09T23:44:28.207054857Z" level=info msg="CreateContainer within sandbox \"37f64bb01aa8554814698918d9556a0d1d74fffd1f330ef9887c9dceabe6984c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 9 23:44:28.207411 containerd[1443]: time="2025-05-09T23:44:28.207386236Z" level=info msg="CreateContainer within sandbox \"023b8afdbab93c44f16b6a5f108bc145051d6e1754861a9aae1c979ffb885a6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 9 23:44:28.227240 containerd[1443]: time="2025-05-09T23:44:28.227049953Z" level=info msg="CreateContainer within sandbox \"f1e3e8bdc112816e02f71ec668ee3b7ceef0e8b67c2148a93889b98b367be114\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"278f55fa2572820e9f24c0429ca1ffb655ea662cef244204e717c14302cbb728\""
May 9 23:44:28.227793 containerd[1443]: time="2025-05-09T23:44:28.227763609Z" level=info msg="StartContainer for \"278f55fa2572820e9f24c0429ca1ffb655ea662cef244204e717c14302cbb728\""
May 9 23:44:28.231062 containerd[1443]: time="2025-05-09T23:44:28.230946255Z" level=info msg="CreateContainer within sandbox \"023b8afdbab93c44f16b6a5f108bc145051d6e1754861a9aae1c979ffb885a6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d49afffd73cfac4e7bf144b7ee8df14019bcd3883dbf7d50e56e54ea112fd727\""
May 9 23:44:28.231534 containerd[1443]: time="2025-05-09T23:44:28.231465794Z" level=info msg="StartContainer for \"d49afffd73cfac4e7bf144b7ee8df14019bcd3883dbf7d50e56e54ea112fd727\""
May 9 23:44:28.233515 containerd[1443]: time="2025-05-09T23:44:28.232877594Z" level=info msg="CreateContainer within sandbox \"37f64bb01aa8554814698918d9556a0d1d74fffd1f330ef9887c9dceabe6984c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ac624cc485d17db6b6e6539138df7e7a7e9ad197349e032575890632ad670b3\""
May 9 23:44:28.233515 containerd[1443]: time="2025-05-09T23:44:28.233266029Z" level=info msg="StartContainer for \"5ac624cc485d17db6b6e6539138df7e7a7e9ad197349e032575890632ad670b3\""
May 9 23:44:28.252664 systemd[1]: Started cri-containerd-278f55fa2572820e9f24c0429ca1ffb655ea662cef244204e717c14302cbb728.scope - libcontainer container 278f55fa2572820e9f24c0429ca1ffb655ea662cef244204e717c14302cbb728.
May 9 23:44:28.267637 systemd[1]: Started cri-containerd-5ac624cc485d17db6b6e6539138df7e7a7e9ad197349e032575890632ad670b3.scope - libcontainer container 5ac624cc485d17db6b6e6539138df7e7a7e9ad197349e032575890632ad670b3.
May 9 23:44:28.268620 systemd[1]: Started cri-containerd-d49afffd73cfac4e7bf144b7ee8df14019bcd3883dbf7d50e56e54ea112fd727.scope - libcontainer container d49afffd73cfac4e7bf144b7ee8df14019bcd3883dbf7d50e56e54ea112fd727.
May 9 23:44:28.329819 containerd[1443]: time="2025-05-09T23:44:28.329725363Z" level=info msg="StartContainer for \"278f55fa2572820e9f24c0429ca1ffb655ea662cef244204e717c14302cbb728\" returns successfully"
May 9 23:44:28.345001 containerd[1443]: time="2025-05-09T23:44:28.344808747Z" level=info msg="StartContainer for \"5ac624cc485d17db6b6e6539138df7e7a7e9ad197349e032575890632ad670b3\" returns successfully"
May 9 23:44:28.345001 containerd[1443]: time="2025-05-09T23:44:28.344943010Z" level=info msg="StartContainer for \"d49afffd73cfac4e7bf144b7ee8df14019bcd3883dbf7d50e56e54ea112fd727\" returns successfully"
May 9 23:44:28.352102 kubelet[2168]: W0509 23:44:28.352041 2168 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused
May 9 23:44:28.352240 kubelet[2168]: E0509 23:44:28.352113 2168 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError"
May 9 23:44:28.410517 kubelet[2168]: I0509 23:44:28.410175 2168 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:44:28.411599 kubelet[2168]: E0509 23:44:28.411567 2168 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
May 9 23:44:28.800000 kubelet[2168]: E0509 23:44:28.799732 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:28.801596 kubelet[2168]: E0509 23:44:28.800583 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:28.802209 kubelet[2168]: E0509 23:44:28.802188 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:28.802415 kubelet[2168]: E0509 23:44:28.802399 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:28.805055 kubelet[2168]: E0509 23:44:28.804871 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:28.805055 kubelet[2168]: E0509 23:44:28.804972 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:29.807034 kubelet[2168]: E0509 23:44:29.806790 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:29.807034 kubelet[2168]: E0509 23:44:29.806929 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:29.808582 kubelet[2168]: E0509 23:44:29.808386 2168 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:44:29.808582 kubelet[2168]: E0509 23:44:29.808526 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:29.859509 kubelet[2168]: E0509 23:44:29.859422 2168 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 9 23:44:30.015703 kubelet[2168]: I0509 23:44:30.013305 2168 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:44:30.021094 kubelet[2168]: I0509 23:44:30.021035 2168 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 9 23:44:30.021094 kubelet[2168]: E0509 23:44:30.021077 2168 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 9 23:44:30.071520 kubelet[2168]: I0509 23:44:30.070743 2168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 9 23:44:30.077854 kubelet[2168]: E0509 23:44:30.077812 2168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 9 23:44:30.078168 kubelet[2168]: I0509 23:44:30.078020 2168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:30.080092 kubelet[2168]: E0509 23:44:30.079909 2168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:30.080092 kubelet[2168]: I0509 23:44:30.079932 2168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 9 23:44:30.081335 kubelet[2168]: E0509 23:44:30.081300 2168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 9 23:44:30.754794 kubelet[2168]: I0509 23:44:30.754756 2168 apiserver.go:52] "Watching apiserver"
May 9 23:44:30.768661 kubelet[2168]: I0509 23:44:30.768626 2168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 9 23:44:32.189716 systemd[1]: Reloading requested from client PID 2450 ('systemctl') (unit session-7.scope)...
May 9 23:44:32.189733 systemd[1]: Reloading...
May 9 23:44:32.252520 zram_generator::config[2492]: No configuration found.
May 9 23:44:32.342415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:44:32.415791 systemd[1]: Reloading finished in 225 ms.
May 9 23:44:32.460880 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:44:32.473507 systemd[1]: kubelet.service: Deactivated successfully.
May 9 23:44:32.473786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:44:32.473840 systemd[1]: kubelet.service: Consumed 2.520s CPU time, 127.3M memory peak, 0B memory swap peak.
May 9 23:44:32.486779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:44:32.590543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:44:32.598053 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 23:44:32.638608 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:44:32.638608 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 9 23:44:32.638608 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:44:32.639000 kubelet[2531]: I0509 23:44:32.638698 2531 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 23:44:32.646411 kubelet[2531]: I0509 23:44:32.646374 2531 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 9 23:44:32.646411 kubelet[2531]: I0509 23:44:32.646405 2531 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 23:44:32.646709 kubelet[2531]: I0509 23:44:32.646683 2531 server.go:954] "Client rotation is on, will bootstrap in background"
May 9 23:44:32.648043 kubelet[2531]: I0509 23:44:32.648012 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 9 23:44:32.650548 kubelet[2531]: I0509 23:44:32.650515 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:44:32.653596 kubelet[2531]: E0509 23:44:32.653554 2531 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 9 23:44:32.653596 kubelet[2531]: I0509 23:44:32.653592 2531 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 9 23:44:32.657316 kubelet[2531]: I0509 23:44:32.657280 2531 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 23:44:32.657601 kubelet[2531]: I0509 23:44:32.657510 2531 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 23:44:32.658014 kubelet[2531]: I0509 23:44:32.657550 2531 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 9 23:44:32.658108 kubelet[2531]: I0509 23:44:32.658029 2531 topology_manager.go:138] "Creating topology manager with none policy"
May 9 23:44:32.658108 kubelet[2531]: I0509 23:44:32.658040 2531 container_manager_linux.go:304] "Creating device plugin manager"
May 9 23:44:32.658108 kubelet[2531]: I0509 23:44:32.658088 2531 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:44:32.658244 kubelet[2531]: I0509 23:44:32.658233 2531 kubelet.go:446] "Attempting to sync node with API server"
May 9 23:44:32.658276 kubelet[2531]: I0509 23:44:32.658250 2531 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 23:44:32.658276 kubelet[2531]: I0509 23:44:32.658269 2531 kubelet.go:352] "Adding apiserver pod source"
May 9 23:44:32.658612 kubelet[2531]: I0509 23:44:32.658278 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 23:44:32.659162 kubelet[2531]: I0509 23:44:32.659143 2531 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 9 23:44:32.659734 kubelet[2531]: I0509 23:44:32.659697 2531 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 23:44:32.660375 kubelet[2531]: I0509 23:44:32.660345 2531 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 9 23:44:32.660497 kubelet[2531]: I0509 23:44:32.660480 2531 server.go:1287] "Started kubelet"
May 9 23:44:32.660755 kubelet[2531]: I0509 23:44:32.660712 2531 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 9 23:44:32.661003 kubelet[2531]: I0509 23:44:32.660950 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 23:44:32.661349 kubelet[2531]: I0509 23:44:32.661335 2531 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 23:44:32.661673 kubelet[2531]: I0509 23:44:32.661645 2531 server.go:490] "Adding debug handlers to kubelet server"
May 9 23:44:32.663868 kubelet[2531]: I0509 23:44:32.663836 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 23:44:32.664045 kubelet[2531]: E0509 23:44:32.663960 2531 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 23:44:32.664084 kubelet[2531]: I0509 23:44:32.663969 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 9 23:44:32.664324 kubelet[2531]: I0509 23:44:32.664281 2531 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 9 23:44:32.664946 kubelet[2531]: I0509 23:44:32.664917 2531 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 23:44:32.665234 kubelet[2531]: I0509 23:44:32.665043 2531 reconciler.go:26] "Reconciler: start to sync state"
May 9 23:44:32.665293 kubelet[2531]: E0509 23:44:32.665265 2531 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 23:44:32.665417 kubelet[2531]: I0509 23:44:32.665400 2531 factory.go:221] Registration of the systemd container factory successfully
May 9 23:44:32.665578 kubelet[2531]: I0509 23:44:32.665556 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 23:44:32.670732 kubelet[2531]: I0509 23:44:32.670707 2531 factory.go:221] Registration of the containerd container factory successfully
May 9 23:44:32.691180 kubelet[2531]: I0509 23:44:32.691047 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 23:44:32.692370 kubelet[2531]: I0509 23:44:32.692345 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 23:44:32.692522 kubelet[2531]: I0509 23:44:32.692508 2531 status_manager.go:227] "Starting to sync pod status with apiserver"
May 9 23:44:32.692846 kubelet[2531]: I0509 23:44:32.692579 2531 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 9 23:44:32.692846 kubelet[2531]: I0509 23:44:32.692591 2531 kubelet.go:2388] "Starting kubelet main sync loop"
May 9 23:44:32.692846 kubelet[2531]: E0509 23:44:32.692634 2531 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 23:44:32.715830 kubelet[2531]: I0509 23:44:32.715724 2531 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 9 23:44:32.715830 kubelet[2531]: I0509 23:44:32.715746 2531 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 9 23:44:32.715830 kubelet[2531]: I0509 23:44:32.715770 2531 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:44:32.715978 kubelet[2531]: I0509 23:44:32.715922 2531 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 9 23:44:32.715978 kubelet[2531]: I0509 23:44:32.715933 2531 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 9 23:44:32.715978 kubelet[2531]: I0509 23:44:32.715950 2531 policy_none.go:49] "None policy: Start"
May 9 23:44:32.715978 kubelet[2531]: I0509 23:44:32.715958 2531 memory_manager.go:186] "Starting memorymanager" policy="None"
May 9 23:44:32.715978 kubelet[2531]: I0509 23:44:32.715967 2531 state_mem.go:35] "Initializing new in-memory state store"
May 9 23:44:32.716079 kubelet[2531]: I0509 23:44:32.716056 2531 state_mem.go:75] "Updated machine memory state"
May 9 23:44:32.720182 kubelet[2531]: I0509 23:44:32.720132 2531 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 23:44:32.720342 kubelet[2531]: I0509 23:44:32.720300 2531 eviction_manager.go:189] "Eviction manager: starting control loop"
May 9 23:44:32.720374 kubelet[2531]: I0509 23:44:32.720344 2531 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 23:44:32.721036 kubelet[2531]: I0509 23:44:32.720622 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 23:44:32.721450 kubelet[2531]: E0509 23:44:32.721426 2531 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 9 23:44:32.794129 kubelet[2531]: I0509 23:44:32.793823 2531 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 9 23:44:32.796332 kubelet[2531]: I0509 23:44:32.793824 2531 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 9 23:44:32.796332 kubelet[2531]: I0509 23:44:32.794050 2531 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.824356 kubelet[2531]: I0509 23:44:32.824323 2531 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:44:32.830267 kubelet[2531]: I0509 23:44:32.830229 2531 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 9 23:44:32.830400 kubelet[2531]: I0509 23:44:32.830352 2531 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 9 23:44:32.866949 kubelet[2531]: I0509 23:44:32.866777 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.866949 kubelet[2531]: I0509 23:44:32.866821 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:32.866949 kubelet[2531]: I0509 23:44:32.866843 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.866949 kubelet[2531]: I0509 23:44:32.866860 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.866949 kubelet[2531]: I0509 23:44:32.866887 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.867244 kubelet[2531]: I0509 23:44:32.866905 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:44:32.867244 kubelet[2531]: I0509 23:44:32.866920 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 9 23:44:32.867244 kubelet[2531]: I0509 23:44:32.866940 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:32.867244 kubelet[2531]: I0509 23:44:32.866955 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec85c956156220fecdf20c75664f4cb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec85c956156220fecdf20c75664f4cb6\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:44:33.101172 kubelet[2531]: E0509 23:44:33.101123 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:33.102297 kubelet[2531]: E0509 23:44:33.102259 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:33.102297 kubelet[2531]: E0509 23:44:33.102271 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:33.190076 sudo[2568]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 9 23:44:33.190355 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 9 23:44:33.631935 sudo[2568]: pam_unix(sudo:session): session closed for user root
May 9 23:44:33.662971 kubelet[2531]: I0509 23:44:33.659286 2531 apiserver.go:52] "Watching apiserver"
May 9 23:44:33.666028 kubelet[2531]: I0509 23:44:33.665989 2531 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 9 23:44:33.691921 kubelet[2531]: I0509 23:44:33.691561 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6915234030000001 podStartE2EDuration="1.691523403s" podCreationTimestamp="2025-05-09 23:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:44:33.691445111 +0000 UTC m=+1.089690793" watchObservedRunningTime="2025-05-09 23:44:33.691523403 +0000 UTC m=+1.089769085"
May 9 23:44:33.704995 kubelet[2531]: I0509 23:44:33.704965 2531 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 9 23:44:33.706220 kubelet[2531]: E0509 23:44:33.705092 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:33.706754 kubelet[2531]: E0509 23:44:33.705722 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:33.706754 kubelet[2531]: I0509 23:44:33.706232 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.706190361 podStartE2EDuration="1.706190361s" podCreationTimestamp="2025-05-09 23:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:44:33.698692122 +0000 UTC m=+1.096937804" watchObservedRunningTime="2025-05-09 23:44:33.706190361 +0000 UTC m=+1.104436043"
May 9 23:44:33.706754 kubelet[2531]: I0509 23:44:33.706632 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.706620525 podStartE2EDuration="1.706620525s" podCreationTimestamp="2025-05-09 23:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:44:33.706602771 +0000 UTC m=+1.104848453" watchObservedRunningTime="2025-05-09 23:44:33.706620525 +0000 UTC m=+1.104866207"
May 9 23:44:33.714510 kubelet[2531]: E0509 23:44:33.714441 2531 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 9 23:44:33.714659 kubelet[2531]: E0509 23:44:33.714642 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:34.706828 kubelet[2531]: E0509 23:44:34.706786 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:34.707162 kubelet[2531]: E0509 23:44:34.707118 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:35.709929 kubelet[2531]: E0509 23:44:35.709608 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:35.710566 kubelet[2531]: E0509 23:44:35.710188 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:36.135002 sudo[1623]: pam_unix(sudo:session): session closed for user root
May 9 23:44:36.136248 sshd[1622]: Connection closed by 10.0.0.1 port 60110
May 9 23:44:36.136965 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
May 9 23:44:36.141631 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:60110.service: Deactivated successfully.
May 9 23:44:36.143469 systemd[1]: session-7.scope: Deactivated successfully.
May 9 23:44:36.143774 systemd[1]: session-7.scope: Consumed 8.154s CPU time, 159.0M memory peak, 0B memory swap peak.
May 9 23:44:36.144354 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
May 9 23:44:36.145600 systemd-logind[1424]: Removed session 7.
May 9 23:44:37.627489 kubelet[2531]: E0509 23:44:37.627447 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:38.582971 kubelet[2531]: I0509 23:44:38.582926 2531 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 9 23:44:38.586200 containerd[1443]: time="2025-05-09T23:44:38.583410152Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 9 23:44:38.586538 kubelet[2531]: I0509 23:44:38.583585 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 9 23:44:39.966146 systemd[1]: Created slice kubepods-besteffort-podb1797618_7fac_4de1_abf5_cf6bde90a8b9.slice - libcontainer container kubepods-besteffort-podb1797618_7fac_4de1_abf5_cf6bde90a8b9.slice.
May 9 23:44:39.978379 systemd[1]: Created slice kubepods-burstable-poda913bef2_8424_4e35_8911_c9845b5db6fd.slice - libcontainer container kubepods-burstable-poda913bef2_8424_4e35_8911_c9845b5db6fd.slice.
May 9 23:44:39.993608 systemd[1]: Created slice kubepods-besteffort-pod4e3a8070_e3d9_4aef_b100_988898fc96be.slice - libcontainer container kubepods-besteffort-pod4e3a8070_e3d9_4aef_b100_988898fc96be.slice.
May 9 23:44:40.010182 kubelet[2531]: I0509 23:44:40.010131 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-config-path\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010182 kubelet[2531]: I0509 23:44:40.010177 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e3a8070-e3d9-4aef-b100-988898fc96be-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gvkbk\" (UID: \"4e3a8070-e3d9-4aef-b100-988898fc96be\") " pod="kube-system/cilium-operator-6c4d7847fc-gvkbk"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010199 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-run\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010215 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-bpf-maps\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010245 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1797618-7fac-4de1-abf5-cf6bde90a8b9-lib-modules\") pod \"kube-proxy-jqcbm\" (UID: \"b1797618-7fac-4de1-abf5-cf6bde90a8b9\") " pod="kube-system/kube-proxy-jqcbm"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010261 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-etc-cni-netd\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010298 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-net\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010587 kubelet[2531]: I0509 23:44:40.010331 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-hubble-tls\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010733 kubelet[2531]: I0509 23:44:40.010349 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgck\" (UniqueName: \"kubernetes.io/projected/4e3a8070-e3d9-4aef-b100-988898fc96be-kube-api-access-4jgck\") pod \"cilium-operator-6c4d7847fc-gvkbk\" (UID: \"4e3a8070-e3d9-4aef-b100-988898fc96be\") " pod="kube-system/cilium-operator-6c4d7847fc-gvkbk"
May 9 23:44:40.010733 kubelet[2531]: I0509 23:44:40.010369 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1797618-7fac-4de1-abf5-cf6bde90a8b9-kube-proxy\") pod \"kube-proxy-jqcbm\" (UID: \"b1797618-7fac-4de1-abf5-cf6bde90a8b9\") " pod="kube-system/kube-proxy-jqcbm"
May 9 23:44:40.010733 kubelet[2531]: I0509 23:44:40.010385 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jsr\" (UniqueName: \"kubernetes.io/projected/b1797618-7fac-4de1-abf5-cf6bde90a8b9-kube-api-access-62jsr\") pod \"kube-proxy-jqcbm\" (UID: \"b1797618-7fac-4de1-abf5-cf6bde90a8b9\") " pod="kube-system/kube-proxy-jqcbm"
May 9 23:44:40.010733 kubelet[2531]: I0509 23:44:40.010401 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-hostproc\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010733 kubelet[2531]: I0509 23:44:40.010417 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-lib-modules\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010835 kubelet[2531]: I0509 23:44:40.010432 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a913bef2-8424-4e35-8911-c9845b5db6fd-clustermesh-secrets\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010835 kubelet[2531]: I0509 23:44:40.010448 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-kernel\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010835 kubelet[2531]: I0509 23:44:40.010462 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852sq\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-kube-api-access-852sq\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010835 kubelet[2531]: I0509 23:44:40.010495 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1797618-7fac-4de1-abf5-cf6bde90a8b9-xtables-lock\") pod \"kube-proxy-jqcbm\" (UID: \"b1797618-7fac-4de1-abf5-cf6bde90a8b9\") " pod="kube-system/kube-proxy-jqcbm"
May 9 23:44:40.010835 kubelet[2531]: I0509 23:44:40.010510 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-cgroup\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010936 kubelet[2531]: I0509 23:44:40.010528 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cni-path\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.010936 kubelet[2531]: I0509 23:44:40.010546 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-xtables-lock\") pod \"cilium-5sw4m\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") " pod="kube-system/cilium-5sw4m"
May 9 23:44:40.272785 kubelet[2531]: E0509 23:44:40.272501 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:40.275983 containerd[1443]: time="2025-05-09T23:44:40.275382655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqcbm,Uid:b1797618-7fac-4de1-abf5-cf6bde90a8b9,Namespace:kube-system,Attempt:0,}"
May 9 23:44:40.281343 kubelet[2531]: E0509 23:44:40.281313 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:40.281857 containerd[1443]: time="2025-05-09T23:44:40.281812867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sw4m,Uid:a913bef2-8424-4e35-8911-c9845b5db6fd,Namespace:kube-system,Attempt:0,}"
May 9 23:44:40.300400 kubelet[2531]: E0509 23:44:40.300360 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:44:40.300986 containerd[1443]: time="2025-05-09T23:44:40.300945309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gvkbk,Uid:4e3a8070-e3d9-4aef-b100-988898fc96be,Namespace:kube-system,Attempt:0,}"
May 9 23:44:40.440284 containerd[1443]: time="2025-05-09T23:44:40.438923301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:40.440284 containerd[1443]: time="2025-05-09T23:44:40.438991162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:40.440284 containerd[1443]: time="2025-05-09T23:44:40.439015115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.440284 containerd[1443]: time="2025-05-09T23:44:40.439086414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.462345 containerd[1443]: time="2025-05-09T23:44:40.461740352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:40.462345 containerd[1443]: time="2025-05-09T23:44:40.461794696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:40.462345 containerd[1443]: time="2025-05-09T23:44:40.461805933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.462345 containerd[1443]: time="2025-05-09T23:44:40.461886990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.469024 containerd[1443]: time="2025-05-09T23:44:40.468740878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:44:40.469024 containerd[1443]: time="2025-05-09T23:44:40.468810418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:44:40.469024 containerd[1443]: time="2025-05-09T23:44:40.468821855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.469024 containerd[1443]: time="2025-05-09T23:44:40.468894074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:44:40.479906 systemd[1]: Started cri-containerd-4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a.scope - libcontainer container 4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a.
May 9 23:44:40.486386 systemd[1]: Started cri-containerd-37b1b938acb4fa7aa50f3e5b11278f3a4ef0cd1b4a427e8156b02849ca6df01b.scope - libcontainer container 37b1b938acb4fa7aa50f3e5b11278f3a4ef0cd1b4a427e8156b02849ca6df01b.
May 9 23:44:40.492318 systemd[1]: Started cri-containerd-ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f.scope - libcontainer container ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f.
May 9 23:44:40.512380 containerd[1443]: time="2025-05-09T23:44:40.512331334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5sw4m,Uid:a913bef2-8424-4e35-8911-c9845b5db6fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\"" May 9 23:44:40.513485 kubelet[2531]: E0509 23:44:40.513019 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:40.519547 containerd[1443]: time="2025-05-09T23:44:40.519483536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:44:40.526187 containerd[1443]: time="2025-05-09T23:44:40.526034792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqcbm,Uid:b1797618-7fac-4de1-abf5-cf6bde90a8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b1b938acb4fa7aa50f3e5b11278f3a4ef0cd1b4a427e8156b02849ca6df01b\"" May 9 23:44:40.526946 kubelet[2531]: E0509 23:44:40.526622 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:40.532170 containerd[1443]: time="2025-05-09T23:44:40.531906806Z" level=info msg="CreateContainer within sandbox \"37b1b938acb4fa7aa50f3e5b11278f3a4ef0cd1b4a427e8156b02849ca6df01b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:44:40.547313 containerd[1443]: time="2025-05-09T23:44:40.547238912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gvkbk,Uid:4e3a8070-e3d9-4aef-b100-988898fc96be,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\"" May 9 23:44:40.548594 kubelet[2531]: E0509 23:44:40.548569 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:40.592418 containerd[1443]: time="2025-05-09T23:44:40.592319295Z" level=info msg="CreateContainer within sandbox \"37b1b938acb4fa7aa50f3e5b11278f3a4ef0cd1b4a427e8156b02849ca6df01b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d132a9ecead07deec42efc779637b35b3c395ed47a701072fea5ce827bd2d4b\"" May 9 23:44:40.593133 containerd[1443]: time="2025-05-09T23:44:40.592952391Z" level=info msg="StartContainer for \"6d132a9ecead07deec42efc779637b35b3c395ed47a701072fea5ce827bd2d4b\"" May 9 23:44:40.633711 systemd[1]: Started cri-containerd-6d132a9ecead07deec42efc779637b35b3c395ed47a701072fea5ce827bd2d4b.scope - libcontainer container 6d132a9ecead07deec42efc779637b35b3c395ed47a701072fea5ce827bd2d4b. 
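The PullImage "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b..." entry above is the kubelet asking the CRI to fetch the Cilium image pinned by digest (when both a tag and a digest are given, the digest wins; the tag is informational). A rough equivalent against containerd's native Go client — a sketch only, using the socket path and k8s.io namespace visible elsewhere in this log:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed pods live in the "k8s.io" namespace, as the shim lines show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by tag@digest, unpacking the snapshot so a container can start from it.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", img.Name())
}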
May 9 23:44:40.667709 containerd[1443]: time="2025-05-09T23:44:40.667648089Z" level=info msg="StartContainer for \"6d132a9ecead07deec42efc779637b35b3c395ed47a701072fea5ce827bd2d4b\" returns successfully" May 9 23:44:40.720822 kubelet[2531]: E0509 23:44:40.720718 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:41.355706 kubelet[2531]: E0509 23:44:41.355668 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:41.373697 kubelet[2531]: I0509 23:44:41.373594 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqcbm" podStartSLOduration=2.373575812 podStartE2EDuration="2.373575812s" podCreationTimestamp="2025-05-09 23:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:44:40.733087876 +0000 UTC m=+8.131333558" watchObservedRunningTime="2025-05-09 23:44:41.373575812 +0000 UTC m=+8.771821494" May 9 23:44:41.729752 kubelet[2531]: E0509 23:44:41.729542 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:44.120529 update_engine[1428]: I20250509 23:44:44.119952 1428 update_attempter.cc:509] Updating boot flags... May 9 23:44:44.232562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2919) May 9 23:44:44.289550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2920) May 9 23:44:45.599516 kubelet[2531]: E0509 23:44:45.599244 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:47.474016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044525387.mount: Deactivated successfully. 
May 9 23:44:47.644169 kubelet[2531]: E0509 23:44:47.643453 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:47.739060 kubelet[2531]: E0509 23:44:47.738835 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:48.868668 containerd[1443]: time="2025-05-09T23:44:48.868384998Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:48.875565 containerd[1443]: time="2025-05-09T23:44:48.875494756Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:44:48.876586 containerd[1443]: time="2025-05-09T23:44:48.876512686Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:48.878821 containerd[1443]: time="2025-05-09T23:44:48.878504477Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.35895432s" May 9 23:44:48.878821 containerd[1443]: time="2025-05-09T23:44:48.878543269Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:44:48.880703 containerd[1443]: time="2025-05-09T23:44:48.880661151Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:44:48.881801 containerd[1443]: time="2025-05-09T23:44:48.881767862Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:44:48.916256 containerd[1443]: time="2025-05-09T23:44:48.916198823Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\"" May 9 23:44:48.916804 containerd[1443]: time="2025-05-09T23:44:48.916772574Z" level=info msg="StartContainer for \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\"" May 9 23:44:48.942656 systemd[1]: Started cri-containerd-a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256.scope - libcontainer container a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256. 
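The pull above completes with "bytes read=157646710" in "8.35895432s", i.e. roughly 157.6 MB in 8.36 s, an effective rate near 18.9 MB/s. The same arithmetic as a trivial sketch:

package main

import "fmt"

func main() {
	const bytesRead = 157646710 // from "bytes read=157646710"
	const seconds = 8.35895432  // from "in 8.35895432s"
	mbps := float64(bytesRead) / seconds / 1e6
	fmt.Printf("effective pull rate: %.1f MB/s\n", mbps) // ~18.9 MB/s
}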
May 9 23:44:48.975193 containerd[1443]: time="2025-05-09T23:44:48.973575013Z" level=info msg="StartContainer for \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\" returns successfully" May 9 23:44:49.014525 systemd[1]: cri-containerd-a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256.scope: Deactivated successfully. May 9 23:44:49.152410 containerd[1443]: time="2025-05-09T23:44:49.136578871Z" level=info msg="shim disconnected" id=a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256 namespace=k8s.io May 9 23:44:49.152410 containerd[1443]: time="2025-05-09T23:44:49.152330152Z" level=warning msg="cleaning up after shim disconnected" id=a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256 namespace=k8s.io May 9 23:44:49.152410 containerd[1443]: time="2025-05-09T23:44:49.152345229Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:44:49.746657 kubelet[2531]: E0509 23:44:49.746072 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:49.752501 containerd[1443]: time="2025-05-09T23:44:49.752058626Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:44:49.763815 containerd[1443]: time="2025-05-09T23:44:49.763757592Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\"" May 9 23:44:49.766234 containerd[1443]: time="2025-05-09T23:44:49.764521586Z" level=info msg="StartContainer for \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\"" May 9 23:44:49.794665 systemd[1]: Started cri-containerd-e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9.scope - libcontainer container e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9. May 9 23:44:49.820467 containerd[1443]: time="2025-05-09T23:44:49.820422022Z" level=info msg="StartContainer for \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\" returns successfully" May 9 23:44:49.853364 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:44:49.853599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:44:49.853676 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:44:49.859967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:44:49.860177 systemd[1]: cri-containerd-e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9.scope: Deactivated successfully. May 9 23:44:49.874067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
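The apply-sysctl-overwrites init container started above adjusts kernel parameters before the Cilium agent runs; the surrounding stop/start of systemd-sysctl.service shows the host reapplying its own settings afterwards. From Go, setting a sysctl reduces to writing under /proc/sys — a sketch, with an illustrative key only, since the log does not record which sysctls the container actually touched:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. "net.ipv4.ip_forward" -> "1".
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative key; requires root, exactly like the init container.
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		log.Fatal(err)
	}
}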
May 9 23:44:49.882616 containerd[1443]: time="2025-05-09T23:44:49.882555778Z" level=info msg="shim disconnected" id=e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9 namespace=k8s.io May 9 23:44:49.882616 containerd[1443]: time="2025-05-09T23:44:49.882612565Z" level=warning msg="cleaning up after shim disconnected" id=e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9 namespace=k8s.io May 9 23:44:49.882616 containerd[1443]: time="2025-05-09T23:44:49.882621603Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:44:49.892208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256-rootfs.mount: Deactivated successfully. May 9 23:44:50.379013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669216175.mount: Deactivated successfully. May 9 23:44:50.726444 containerd[1443]: time="2025-05-09T23:44:50.726324794Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:50.727121 containerd[1443]: time="2025-05-09T23:44:50.726847844Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:44:50.727639 containerd[1443]: time="2025-05-09T23:44:50.727609842Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:44:50.729667 containerd[1443]: time="2025-05-09T23:44:50.729552032Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.848849009s" May 9 23:44:50.729667 containerd[1443]: time="2025-05-09T23:44:50.729585385Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:44:50.734448 containerd[1443]: time="2025-05-09T23:44:50.734390808Z" level=info msg="CreateContainer within sandbox \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:44:50.748904 kubelet[2531]: E0509 23:44:50.748791 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:50.751095 containerd[1443]: time="2025-05-09T23:44:50.751056564Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:44:50.751132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683440676.mount: Deactivated successfully. 
May 9 23:44:50.753777 containerd[1443]: time="2025-05-09T23:44:50.753728559Z" level=info msg="CreateContainer within sandbox \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\"" May 9 23:44:50.754520 containerd[1443]: time="2025-05-09T23:44:50.754136352Z" level=info msg="StartContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\"" May 9 23:44:50.789753 systemd[1]: Started cri-containerd-7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba.scope - libcontainer container 7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba. May 9 23:44:50.827715 containerd[1443]: time="2025-05-09T23:44:50.827637728Z" level=info msg="StartContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" returns successfully" May 9 23:44:50.838970 containerd[1443]: time="2025-05-09T23:44:50.838917462Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\"" May 9 23:44:50.839428 containerd[1443]: time="2025-05-09T23:44:50.839396401Z" level=info msg="StartContainer for \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\"" May 9 23:44:50.873712 systemd[1]: Started cri-containerd-3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b.scope - libcontainer container 3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b. May 9 23:44:50.911441 containerd[1443]: time="2025-05-09T23:44:50.911395694Z" level=info msg="StartContainer for \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\" returns successfully" May 9 23:44:50.915662 systemd[1]: cri-containerd-3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b.scope: Deactivated successfully. May 9 23:44:50.938033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b-rootfs.mount: Deactivated successfully. 
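The mount-bpf-fs step above is the Cilium init container that ensures the BPF filesystem is mounted at /sys/fs/bpf, so BPF maps survive agent restarts. The equivalent mount from Go — a sketch using x/sys/unix, with the mountpoint per Cilium's documented default and idempotence ("already mounted") handling omitted:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of "mount -t bpf bpffs /sys/fs/bpf".
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
}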
May 9 23:44:50.945303 containerd[1443]: time="2025-05-09T23:44:50.945241177Z" level=info msg="shim disconnected" id=3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b namespace=k8s.io May 9 23:44:50.945303 containerd[1443]: time="2025-05-09T23:44:50.945298245Z" level=warning msg="cleaning up after shim disconnected" id=3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b namespace=k8s.io May 9 23:44:50.945303 containerd[1443]: time="2025-05-09T23:44:50.945308322Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:44:51.752290 kubelet[2531]: E0509 23:44:51.752256 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:51.755121 containerd[1443]: time="2025-05-09T23:44:51.755071171Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:44:51.761677 kubelet[2531]: E0509 23:44:51.761323 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:51.785146 containerd[1443]: time="2025-05-09T23:44:51.784938852Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\"" May 9 23:44:51.786571 containerd[1443]: time="2025-05-09T23:44:51.785683219Z" level=info msg="StartContainer for \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\"" May 9 23:44:51.828723 systemd[1]: Started cri-containerd-536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac.scope - libcontainer container 536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac. May 9 23:44:51.854048 systemd[1]: cri-containerd-536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac.scope: Deactivated successfully. May 9 23:44:51.858400 containerd[1443]: time="2025-05-09T23:44:51.858352811Z" level=info msg="StartContainer for \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\" returns successfully" May 9 23:44:51.878781 containerd[1443]: time="2025-05-09T23:44:51.878646773Z" level=info msg="shim disconnected" id=536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac namespace=k8s.io May 9 23:44:51.878950 containerd[1443]: time="2025-05-09T23:44:51.878788704Z" level=warning msg="cleaning up after shim disconnected" id=536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac namespace=k8s.io May 9 23:44:51.878950 containerd[1443]: time="2025-05-09T23:44:51.878800422Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:44:51.891942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac-rootfs.mount: Deactivated successfully. 
May 9 23:44:52.782963 kubelet[2531]: E0509 23:44:52.782632 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:52.782963 kubelet[2531]: E0509 23:44:52.782784 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:52.784880 containerd[1443]: time="2025-05-09T23:44:52.784843282Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:44:52.798248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408103265.mount: Deactivated successfully. May 9 23:44:52.800622 kubelet[2531]: I0509 23:44:52.800532 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gvkbk" podStartSLOduration=3.619383053 podStartE2EDuration="13.800515971s" podCreationTimestamp="2025-05-09 23:44:39 +0000 UTC" firstStartedPulling="2025-05-09 23:44:40.549129323 +0000 UTC m=+7.947375005" lastFinishedPulling="2025-05-09 23:44:50.730262241 +0000 UTC m=+18.128507923" observedRunningTime="2025-05-09 23:44:51.795016107 +0000 UTC m=+19.193261789" watchObservedRunningTime="2025-05-09 23:44:52.800515971 +0000 UTC m=+20.198761653" May 9 23:44:52.802853 containerd[1443]: time="2025-05-09T23:44:52.801188837Z" level=info msg="CreateContainer within sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\"" May 9 23:44:52.802853 containerd[1443]: time="2025-05-09T23:44:52.802352247Z" level=info msg="StartContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\"" May 9 23:44:52.825651 systemd[1]: Started cri-containerd-e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304.scope - libcontainer container e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304. 
May 9 23:44:52.851813 containerd[1443]: time="2025-05-09T23:44:52.851769959Z" level=info msg="StartContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" returns successfully" May 9 23:44:53.056751 kubelet[2531]: I0509 23:44:53.056629 2531 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 23:44:53.100952 kubelet[2531]: I0509 23:44:53.100786 2531 status_manager.go:890] "Failed to get status for pod" podUID="e2d85e10-cd93-420a-bcb5-8ae937335351" pod="kube-system/coredns-668d6bf9bc-qmhl5" err="pods \"coredns-668d6bf9bc-qmhl5\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 9 23:44:53.100952 kubelet[2531]: W0509 23:44:53.100903 2531 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 9 23:44:53.100952 kubelet[2531]: E0509 23:44:53.100943 2531 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 9 23:44:53.109483 systemd[1]: Created slice kubepods-burstable-pode2d85e10_cd93_420a_bcb5_8ae937335351.slice - libcontainer container kubepods-burstable-pode2d85e10_cd93_420a_bcb5_8ae937335351.slice. May 9 23:44:53.119411 systemd[1]: Created slice kubepods-burstable-pod01314301_bee1_460d_8c33_93281f63a3ac.slice - libcontainer container kubepods-burstable-pod01314301_bee1_460d_8c33_93281f63a3ac.slice. 
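The "Created slice kubepods-burstable-pod..." entries above show how the kubelet's systemd cgroup driver names pod cgroups: the pod's QoS class plus its UID with every "-" mapped to "_". A sketch reproducing the naming visible in the log (the convention as observed here, not kubelet's actual implementation):

package main

import (
	"fmt"
	"strings"
)

// sliceForPod builds the systemd slice name for a pod from its QoS class
// and UID, matching the "Created slice" lines above.
func sliceForPod(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceForPod("burstable", "01314301-bee1-460d-8c33-93281f63a3ac"))
	// kubepods-burstable-pod01314301_bee1_460d_8c33_93281f63a3ac.slice
}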
May 9 23:44:53.262404 kubelet[2531]: I0509 23:44:53.262351 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d85e10-cd93-420a-bcb5-8ae937335351-config-volume\") pod \"coredns-668d6bf9bc-qmhl5\" (UID: \"e2d85e10-cd93-420a-bcb5-8ae937335351\") " pod="kube-system/coredns-668d6bf9bc-qmhl5" May 9 23:44:53.262404 kubelet[2531]: I0509 23:44:53.262401 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg7lw\" (UniqueName: \"kubernetes.io/projected/e2d85e10-cd93-420a-bcb5-8ae937335351-kube-api-access-qg7lw\") pod \"coredns-668d6bf9bc-qmhl5\" (UID: \"e2d85e10-cd93-420a-bcb5-8ae937335351\") " pod="kube-system/coredns-668d6bf9bc-qmhl5" May 9 23:44:53.262605 kubelet[2531]: I0509 23:44:53.262426 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shthv\" (UniqueName: \"kubernetes.io/projected/01314301-bee1-460d-8c33-93281f63a3ac-kube-api-access-shthv\") pod \"coredns-668d6bf9bc-6xj2r\" (UID: \"01314301-bee1-460d-8c33-93281f63a3ac\") " pod="kube-system/coredns-668d6bf9bc-6xj2r" May 9 23:44:53.262605 kubelet[2531]: I0509 23:44:53.262446 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01314301-bee1-460d-8c33-93281f63a3ac-config-volume\") pod \"coredns-668d6bf9bc-6xj2r\" (UID: \"01314301-bee1-460d-8c33-93281f63a3ac\") " pod="kube-system/coredns-668d6bf9bc-6xj2r" May 9 23:44:53.786216 kubelet[2531]: E0509 23:44:53.786187 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:54.365566 kubelet[2531]: E0509 23:44:54.365264 2531 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 9 23:44:54.365566 kubelet[2531]: E0509 23:44:54.365361 2531 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01314301-bee1-460d-8c33-93281f63a3ac-config-volume podName:01314301-bee1-460d-8c33-93281f63a3ac nodeName:}" failed. No retries permitted until 2025-05-09 23:44:54.865338845 +0000 UTC m=+22.263584527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/01314301-bee1-460d-8c33-93281f63a3ac-config-volume") pod "coredns-668d6bf9bc-6xj2r" (UID: "01314301-bee1-460d-8c33-93281f63a3ac") : failed to sync configmap cache: timed out waiting for the condition May 9 23:44:54.366896 kubelet[2531]: E0509 23:44:54.366856 2531 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 9 23:44:54.366959 kubelet[2531]: E0509 23:44:54.366938 2531 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2d85e10-cd93-420a-bcb5-8ae937335351-config-volume podName:e2d85e10-cd93-420a-bcb5-8ae937335351 nodeName:}" failed. No retries permitted until 2025-05-09 23:44:54.86691875 +0000 UTC m=+22.265164432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e2d85e10-cd93-420a-bcb5-8ae937335351-config-volume") pod "coredns-668d6bf9bc-qmhl5" (UID: "e2d85e10-cd93-420a-bcb5-8ae937335351") : failed to sync configmap cache: timed out waiting for the condition May 9 23:44:54.788009 kubelet[2531]: E0509 23:44:54.787976 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:54.912687 kubelet[2531]: E0509 23:44:54.912642 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:54.913437 containerd[1443]: time="2025-05-09T23:44:54.913399484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmhl5,Uid:e2d85e10-cd93-420a-bcb5-8ae937335351,Namespace:kube-system,Attempt:0,}" May 9 23:44:54.922732 kubelet[2531]: E0509 23:44:54.922455 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:54.923375 containerd[1443]: time="2025-05-09T23:44:54.923223974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6xj2r,Uid:01314301-bee1-460d-8c33-93281f63a3ac,Namespace:kube-system,Attempt:0,}" May 9 23:44:55.110033 systemd-networkd[1386]: cilium_host: Link UP May 9 23:44:55.110165 systemd-networkd[1386]: cilium_net: Link UP May 9 23:44:55.110168 systemd-networkd[1386]: cilium_net: Gained carrier May 9 23:44:55.110294 systemd-networkd[1386]: cilium_host: Gained carrier May 9 23:44:55.110629 systemd-networkd[1386]: cilium_host: Gained IPv6LL May 9 23:44:55.180606 systemd-networkd[1386]: cilium_net: Gained IPv6LL May 9 23:44:55.210836 systemd-networkd[1386]: cilium_vxlan: Link UP May 9 23:44:55.210844 systemd-networkd[1386]: cilium_vxlan: Gained carrier May 9 23:44:55.528519 kernel: NET: Registered PF_ALG protocol family May 9 23:44:55.790285 kubelet[2531]: E0509 23:44:55.789933 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:56.183554 systemd-networkd[1386]: lxc_health: Link UP May 9 23:44:56.196507 systemd-networkd[1386]: lxc_health: Gained carrier May 9 23:44:56.300594 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL May 9 23:44:56.304362 kubelet[2531]: I0509 23:44:56.304065 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5sw4m" podStartSLOduration=8.940669621 podStartE2EDuration="17.304045946s" podCreationTimestamp="2025-05-09 23:44:39 +0000 UTC" firstStartedPulling="2025-05-09 23:44:40.517102628 +0000 UTC m=+7.915348310" lastFinishedPulling="2025-05-09 23:44:48.880478953 +0000 UTC m=+16.278724635" observedRunningTime="2025-05-09 23:44:53.909399521 +0000 UTC m=+21.307645283" watchObservedRunningTime="2025-05-09 23:44:56.304045946 +0000 UTC m=+23.702291628" May 9 23:44:56.630922 systemd-networkd[1386]: lxcfdd92214b7d0: Link UP May 9 23:44:56.636217 systemd-networkd[1386]: lxc4507f303b366: Link UP May 9 23:44:56.643550 kernel: eth0: renamed from tmpa8a2c May 9 23:44:56.650524 kernel: eth0: renamed from tmp38ae9 May 9 23:44:56.657840 systemd-networkd[1386]: lxc4507f303b366: Gained carrier May 9 23:44:56.658153 systemd-networkd[1386]: 
lxcfdd92214b7d0: Gained carrier May 9 23:44:56.791711 kubelet[2531]: E0509 23:44:56.791675 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:44:57.709023 systemd-networkd[1386]: lxc4507f303b366: Gained IPv6LL May 9 23:44:57.965992 systemd-networkd[1386]: lxc_health: Gained IPv6LL May 9 23:44:57.966254 systemd-networkd[1386]: lxcfdd92214b7d0: Gained IPv6LL May 9 23:44:59.667279 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:42808.service - OpenSSH per-connection server daemon (10.0.0.1:42808). May 9 23:44:59.716305 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 42808 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:44:59.717692 sshd-session[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:44:59.721497 systemd-logind[1424]: New session 8 of user core. May 9 23:44:59.728669 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 23:44:59.867206 sshd[3788]: Connection closed by 10.0.0.1 port 42808 May 9 23:44:59.867572 sshd-session[3786]: pam_unix(sshd:session): session closed for user core May 9 23:44:59.871813 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:42808.service: Deactivated successfully. May 9 23:44:59.873448 systemd[1]: session-8.scope: Deactivated successfully. May 9 23:44:59.874706 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. May 9 23:44:59.876228 systemd-logind[1424]: Removed session 8. May 9 23:45:00.316382 containerd[1443]: time="2025-05-09T23:45:00.316282254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:00.316382 containerd[1443]: time="2025-05-09T23:45:00.316343685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:00.317113 containerd[1443]: time="2025-05-09T23:45:00.316385319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:00.317113 containerd[1443]: time="2025-05-09T23:45:00.316518138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:00.333050 containerd[1443]: time="2025-05-09T23:45:00.332938810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:00.333050 containerd[1443]: time="2025-05-09T23:45:00.333012879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:00.333586 containerd[1443]: time="2025-05-09T23:45:00.333440173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:00.333798 containerd[1443]: time="2025-05-09T23:45:00.333737967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:00.337218 systemd[1]: Started cri-containerd-38ae95d09e065912292e091f0168b868b63f3cfb0004189c8aeef0c5bcd95352.scope - libcontainer container 38ae95d09e065912292e091f0168b868b63f3cfb0004189c8aeef0c5bcd95352. 
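The systemd-networkd entries above trace Cilium's datapath coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, the lxc_health probe interface, and one lxc* veth per pod (the "eth0: renamed from tmp..." kernel lines are the pod-side ends being moved into their network namespaces). Listing those interfaces from Go — a sketch using the vishvananda/netlink library, which must be running on the node itself:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		// Expect cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc*...
		fmt.Printf("%-18s %s\n", attrs.Name, attrs.OperState)
	}
}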
May 9 23:45:00.351237 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:45:00.363701 systemd[1]: Started cri-containerd-a8a2ce2468ba5d4d7c2c3ab15e3fcb9c3f64b2e3a7914c5fb4cbcc5b11c51300.scope - libcontainer container a8a2ce2468ba5d4d7c2c3ab15e3fcb9c3f64b2e3a7914c5fb4cbcc5b11c51300. May 9 23:45:00.373596 containerd[1443]: time="2025-05-09T23:45:00.373560957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmhl5,Uid:e2d85e10-cd93-420a-bcb5-8ae937335351,Namespace:kube-system,Attempt:0,} returns sandbox id \"38ae95d09e065912292e091f0168b868b63f3cfb0004189c8aeef0c5bcd95352\"" May 9 23:45:00.374268 kubelet[2531]: E0509 23:45:00.374237 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:00.376602 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 23:45:00.389011 containerd[1443]: time="2025-05-09T23:45:00.388709345Z" level=info msg="CreateContainer within sandbox \"38ae95d09e065912292e091f0168b868b63f3cfb0004189c8aeef0c5bcd95352\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:45:00.394403 containerd[1443]: time="2025-05-09T23:45:00.394361314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6xj2r,Uid:01314301-bee1-460d-8c33-93281f63a3ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a2ce2468ba5d4d7c2c3ab15e3fcb9c3f64b2e3a7914c5fb4cbcc5b11c51300\"" May 9 23:45:00.395114 kubelet[2531]: E0509 23:45:00.395079 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:00.402322 containerd[1443]: time="2025-05-09T23:45:00.402269337Z" level=info msg="CreateContainer within sandbox \"38ae95d09e065912292e091f0168b868b63f3cfb0004189c8aeef0c5bcd95352\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18df144b993a534f27772f51d92674bf7dcdcd3e220475b55331063315daaefb\"" May 9 23:45:00.403877 containerd[1443]: time="2025-05-09T23:45:00.402702150Z" level=info msg="StartContainer for \"18df144b993a534f27772f51d92674bf7dcdcd3e220475b55331063315daaefb\"" May 9 23:45:00.405071 containerd[1443]: time="2025-05-09T23:45:00.405019594Z" level=info msg="CreateContainer within sandbox \"a8a2ce2468ba5d4d7c2c3ab15e3fcb9c3f64b2e3a7914c5fb4cbcc5b11c51300\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:45:00.416400 containerd[1443]: time="2025-05-09T23:45:00.415360922Z" level=info msg="CreateContainer within sandbox \"a8a2ce2468ba5d4d7c2c3ab15e3fcb9c3f64b2e3a7914c5fb4cbcc5b11c51300\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74ffd867ebbda7f7fd6170ebcb1f506e216eab19840603928a77734c20574ef1\"" May 9 23:45:00.416400 containerd[1443]: time="2025-05-09T23:45:00.416005462Z" level=info msg="StartContainer for \"74ffd867ebbda7f7fd6170ebcb1f506e216eab19840603928a77734c20574ef1\"" May 9 23:45:00.429654 systemd[1]: Started cri-containerd-18df144b993a534f27772f51d92674bf7dcdcd3e220475b55331063315daaefb.scope - libcontainer container 18df144b993a534f27772f51d92674bf7dcdcd3e220475b55331063315daaefb. 
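The paired "CreateContainer within sandbox ... returns container id" and "StartContainer ... returns successfully" entries above are the CRI's two-step create/start. Against containerd's native client the same two steps look roughly like this — a sketch only: the CoreDNS image reference is illustrative (the log never names it), and the image is assumed already present:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Illustrative ref; assume the kubelet already pulled it.
	img, err := client.GetImage(ctx, "registry.k8s.io/coredns/coredns:v1.11.1")
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata + writable snapshot + OCI spec.
	container, err := client.NewContainer(ctx, "coredns-example",
		containerd.WithNewSnapshot("coredns-example-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer: a task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started task pid %d", task.Pid())
}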
May 9 23:45:00.443638 systemd[1]: Started cri-containerd-74ffd867ebbda7f7fd6170ebcb1f506e216eab19840603928a77734c20574ef1.scope - libcontainer container 74ffd867ebbda7f7fd6170ebcb1f506e216eab19840603928a77734c20574ef1. May 9 23:45:00.464067 containerd[1443]: time="2025-05-09T23:45:00.464013272Z" level=info msg="StartContainer for \"18df144b993a534f27772f51d92674bf7dcdcd3e220475b55331063315daaefb\" returns successfully" May 9 23:45:00.484983 containerd[1443]: time="2025-05-09T23:45:00.484937490Z" level=info msg="StartContainer for \"74ffd867ebbda7f7fd6170ebcb1f506e216eab19840603928a77734c20574ef1\" returns successfully" May 9 23:45:00.802636 kubelet[2531]: E0509 23:45:00.802341 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:00.805719 kubelet[2531]: E0509 23:45:00.805267 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:00.814834 kubelet[2531]: I0509 23:45:00.814771 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6xj2r" podStartSLOduration=21.814753436 podStartE2EDuration="21.814753436s" podCreationTimestamp="2025-05-09 23:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:45:00.814570224 +0000 UTC m=+28.212815906" watchObservedRunningTime="2025-05-09 23:45:00.814753436 +0000 UTC m=+28.212999118" May 9 23:45:00.826414 kubelet[2531]: I0509 23:45:00.826358 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qmhl5" podStartSLOduration=21.826299539 podStartE2EDuration="21.826299539s" podCreationTimestamp="2025-05-09 23:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:45:00.825358444 +0000 UTC m=+28.223604126" watchObservedRunningTime="2025-05-09 23:45:00.826299539 +0000 UTC m=+28.224545221" May 9 23:45:01.805978 kubelet[2531]: E0509 23:45:01.805933 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:01.805978 kubelet[2531]: E0509 23:45:01.805966 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:02.808239 kubelet[2531]: E0509 23:45:02.807913 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:02.808239 kubelet[2531]: E0509 23:45:02.807986 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:04.887849 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:33244.service - OpenSSH per-connection server daemon (10.0.0.1:33244). 
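The pod_startup_latency_tracker entries above report podStartE2EDuration as roughly the gap between podCreationTimestamp and observedRunningTime (the zero-value "0001-01-01" pull timestamps mean no image pull contributed). Recomputing the ~21.81s figure for coredns-668d6bf9bc-6xj2r from the logged timestamps, as a sketch:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the timestamps as they appear in the kubelet log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-05-09 23:44:39 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-05-09 23:45:00.814570224 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // ≈ 21.81s, matching podStartE2EDuration
}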
May 9 23:45:04.937015 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 33244 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:04.938397 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:04.942880 systemd-logind[1424]: New session 9 of user core. May 9 23:45:04.959699 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 23:45:05.100747 sshd[3978]: Connection closed by 10.0.0.1 port 33244 May 9 23:45:05.101276 sshd-session[3976]: pam_unix(sshd:session): session closed for user core May 9 23:45:05.104257 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:33244.service: Deactivated successfully. May 9 23:45:05.106592 systemd[1]: session-9.scope: Deactivated successfully. May 9 23:45:05.108106 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. May 9 23:45:05.109131 systemd-logind[1424]: Removed session 9. May 9 23:45:06.534427 kubelet[2531]: I0509 23:45:06.534300 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 23:45:06.534832 kubelet[2531]: E0509 23:45:06.534772 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:06.820674 kubelet[2531]: E0509 23:45:06.820562 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:10.111577 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:33258.service - OpenSSH per-connection server daemon (10.0.0.1:33258). May 9 23:45:10.164276 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 33258 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:10.165916 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:10.170311 systemd-logind[1424]: New session 10 of user core. May 9 23:45:10.182690 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 23:45:10.304799 sshd[3994]: Connection closed by 10.0.0.1 port 33258 May 9 23:45:10.305378 sshd-session[3992]: pam_unix(sshd:session): session closed for user core May 9 23:45:10.308671 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:33258.service: Deactivated successfully. May 9 23:45:10.310786 systemd[1]: session-10.scope: Deactivated successfully. May 9 23:45:10.311673 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. May 9 23:45:10.312847 systemd-logind[1424]: Removed session 10. May 9 23:45:15.320916 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:60996.service - OpenSSH per-connection server daemon (10.0.0.1:60996). May 9 23:45:15.378751 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 60996 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:15.380144 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:15.388919 systemd-logind[1424]: New session 11 of user core. May 9 23:45:15.398728 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 23:45:15.537699 sshd[4011]: Connection closed by 10.0.0.1 port 60996 May 9 23:45:15.538086 sshd-session[4009]: pam_unix(sshd:session): session closed for user core May 9 23:45:15.551304 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:60996.service: Deactivated successfully. May 9 23:45:15.553400 systemd[1]: session-11.scope: Deactivated successfully. 
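Each "Accepted publickey ... RSA SHA256:Q9AEy6..." line above logs the client key's fingerprint: the unpadded base64 of the SHA-256 hash of the public key blob. Computing the same fingerprint from an authorized_keys-format key in Go — a sketch; the key path is illustrative:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative path; any OpenSSH-format public key works here.
	data, err := os.ReadFile("/home/core/.ssh/id_rsa.pub")
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:..." in the same form sshd logs on accept.
	fmt.Println(ssh.FingerprintSHA256(pub))
}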
May 9 23:45:15.555802 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. May 9 23:45:15.570826 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776). May 9 23:45:15.572183 systemd-logind[1424]: Removed session 11. May 9 23:45:15.616887 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:15.618332 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:15.622447 systemd-logind[1424]: New session 12 of user core. May 9 23:45:15.634938 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 23:45:15.796101 sshd[4027]: Connection closed by 10.0.0.1 port 32776 May 9 23:45:15.796487 sshd-session[4025]: pam_unix(sshd:session): session closed for user core May 9 23:45:15.809115 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:32776.service: Deactivated successfully. May 9 23:45:15.811643 systemd[1]: session-12.scope: Deactivated successfully. May 9 23:45:15.813144 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. May 9 23:45:15.819173 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:32792.service - OpenSSH per-connection server daemon (10.0.0.1:32792). May 9 23:45:15.821281 systemd-logind[1424]: Removed session 12. May 9 23:45:15.865995 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 32792 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:15.867424 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:15.871213 systemd-logind[1424]: New session 13 of user core. May 9 23:45:15.877662 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 23:45:15.993330 sshd[4039]: Connection closed by 10.0.0.1 port 32792 May 9 23:45:15.993569 sshd-session[4037]: pam_unix(sshd:session): session closed for user core May 9 23:45:15.997301 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:32792.service: Deactivated successfully. May 9 23:45:15.999840 systemd[1]: session-13.scope: Deactivated successfully. May 9 23:45:16.000737 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. May 9 23:45:16.001608 systemd-logind[1424]: Removed session 13. May 9 23:45:21.004590 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:32808.service - OpenSSH per-connection server daemon (10.0.0.1:32808). May 9 23:45:21.056491 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 32808 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:21.057991 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:21.066796 systemd-logind[1424]: New session 14 of user core. May 9 23:45:21.077857 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 23:45:21.206897 sshd[4055]: Connection closed by 10.0.0.1 port 32808 May 9 23:45:21.207509 sshd-session[4053]: pam_unix(sshd:session): session closed for user core May 9 23:45:21.211812 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:32808.service: Deactivated successfully. May 9 23:45:21.214056 systemd[1]: session-14.scope: Deactivated successfully. May 9 23:45:21.215024 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. May 9 23:45:21.216017 systemd-logind[1424]: Removed session 14. 
May 9 23:45:26.218297 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:37636.service - OpenSSH per-connection server daemon (10.0.0.1:37636). May 9 23:45:26.266667 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 37636 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:26.267999 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:26.271840 systemd-logind[1424]: New session 15 of user core. May 9 23:45:26.281664 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 23:45:26.412234 sshd[4069]: Connection closed by 10.0.0.1 port 37636 May 9 23:45:26.412630 sshd-session[4067]: pam_unix(sshd:session): session closed for user core May 9 23:45:26.423008 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:37636.service: Deactivated successfully. May 9 23:45:26.425305 systemd[1]: session-15.scope: Deactivated successfully. May 9 23:45:26.427227 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. May 9 23:45:26.440774 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:37642.service - OpenSSH per-connection server daemon (10.0.0.1:37642). May 9 23:45:26.442315 systemd-logind[1424]: Removed session 15. May 9 23:45:26.489042 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 37642 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:26.490223 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:26.495334 systemd-logind[1424]: New session 16 of user core. May 9 23:45:26.505639 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 23:45:26.793054 sshd[4084]: Connection closed by 10.0.0.1 port 37642 May 9 23:45:26.793127 sshd-session[4082]: pam_unix(sshd:session): session closed for user core May 9 23:45:26.811061 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:37642.service: Deactivated successfully. May 9 23:45:26.812705 systemd[1]: session-16.scope: Deactivated successfully. May 9 23:45:26.814087 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. May 9 23:45:26.820753 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:37646.service - OpenSSH per-connection server daemon (10.0.0.1:37646). May 9 23:45:26.821860 systemd-logind[1424]: Removed session 16. May 9 23:45:26.868559 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 37646 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:26.869928 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:26.874546 systemd-logind[1424]: New session 17 of user core. May 9 23:45:26.881638 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 23:45:27.673161 sshd[4097]: Connection closed by 10.0.0.1 port 37646 May 9 23:45:27.674011 sshd-session[4095]: pam_unix(sshd:session): session closed for user core May 9 23:45:27.683630 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:37646.service: Deactivated successfully. May 9 23:45:27.688794 systemd[1]: session-17.scope: Deactivated successfully. May 9 23:45:27.691762 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. May 9 23:45:27.699830 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:37662.service - OpenSSH per-connection server daemon (10.0.0.1:37662). May 9 23:45:27.702983 systemd-logind[1424]: Removed session 17. 
May 9 23:45:27.740567 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 37662 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:27.741464 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:27.745615 systemd-logind[1424]: New session 18 of user core. May 9 23:45:27.755665 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 23:45:27.997164 sshd[4118]: Connection closed by 10.0.0.1 port 37662 May 9 23:45:27.998621 sshd-session[4116]: pam_unix(sshd:session): session closed for user core May 9 23:45:28.006354 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:37662.service: Deactivated successfully. May 9 23:45:28.009540 systemd[1]: session-18.scope: Deactivated successfully. May 9 23:45:28.011775 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. May 9 23:45:28.026824 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:37676.service - OpenSSH per-connection server daemon (10.0.0.1:37676). May 9 23:45:28.028165 systemd-logind[1424]: Removed session 18. May 9 23:45:28.066871 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 37676 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:28.068348 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:28.073348 systemd-logind[1424]: New session 19 of user core. May 9 23:45:28.082686 systemd[1]: Started session-19.scope - Session 19 of User core. May 9 23:45:28.200905 sshd[4130]: Connection closed by 10.0.0.1 port 37676 May 9 23:45:28.201617 sshd-session[4128]: pam_unix(sshd:session): session closed for user core May 9 23:45:28.205996 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:37676.service: Deactivated successfully. May 9 23:45:28.208715 systemd[1]: session-19.scope: Deactivated successfully. May 9 23:45:28.209649 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. May 9 23:45:28.210973 systemd-logind[1424]: Removed session 19. May 9 23:45:33.211294 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:50958.service - OpenSSH per-connection server daemon (10.0.0.1:50958). May 9 23:45:33.259328 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 50958 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:33.260699 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:33.266368 systemd-logind[1424]: New session 20 of user core. May 9 23:45:33.288021 systemd[1]: Started session-20.scope - Session 20 of User core. May 9 23:45:33.399336 sshd[4149]: Connection closed by 10.0.0.1 port 50958 May 9 23:45:33.399706 sshd-session[4147]: pam_unix(sshd:session): session closed for user core May 9 23:45:33.403126 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:50958.service: Deactivated successfully. May 9 23:45:33.406075 systemd[1]: session-20.scope: Deactivated successfully. May 9 23:45:33.406741 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. May 9 23:45:33.408142 systemd-logind[1424]: Removed session 20. May 9 23:45:38.414144 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:51044.service - OpenSSH per-connection server daemon (10.0.0.1:51044). 
May 9 23:45:38.460187 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 51044 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:38.461919 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:38.466132 systemd-logind[1424]: New session 21 of user core. May 9 23:45:38.477880 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 23:45:38.614348 sshd[4163]: Connection closed by 10.0.0.1 port 51044 May 9 23:45:38.614710 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 9 23:45:38.618223 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:51044.service: Deactivated successfully. May 9 23:45:38.620454 systemd[1]: session-21.scope: Deactivated successfully. May 9 23:45:38.621889 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. May 9 23:45:38.623638 systemd-logind[1424]: Removed session 21. May 9 23:45:43.633802 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388). May 9 23:45:43.691773 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:43.693057 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:43.699045 systemd-logind[1424]: New session 22 of user core. May 9 23:45:43.708739 systemd[1]: Started session-22.scope - Session 22 of User core. May 9 23:45:43.839451 sshd[4179]: Connection closed by 10.0.0.1 port 47388 May 9 23:45:43.839987 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 9 23:45:43.853020 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:47388.service: Deactivated successfully. May 9 23:45:43.854882 systemd[1]: session-22.scope: Deactivated successfully. May 9 23:45:43.857717 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. May 9 23:45:43.867763 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:47396.service - OpenSSH per-connection server daemon (10.0.0.1:47396). May 9 23:45:43.868950 systemd-logind[1424]: Removed session 22. May 9 23:45:43.918784 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 47396 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:43.919205 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:43.923641 systemd-logind[1424]: New session 23 of user core. May 9 23:45:43.938682 systemd[1]: Started session-23.scope - Session 23 of User core. May 9 23:45:44.693592 kubelet[2531]: E0509 23:45:44.693550 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:45.674931 containerd[1443]: time="2025-05-09T23:45:45.674859804Z" level=info msg="StopContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" with timeout 30 (s)" May 9 23:45:45.676028 containerd[1443]: time="2025-05-09T23:45:45.675822809Z" level=info msg="Stop container \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" with signal terminated" May 9 23:45:45.688144 systemd[1]: cri-containerd-7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba.scope: Deactivated successfully. 
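The "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" below is the standard graceful-stop contract: the runtime delivers SIGTERM, waits out the grace period, and escalates to SIGKILL only if the process outlives it. The same pattern for a local process, as a generic sketch:

package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// SIGTERM first, as StopContainer does with its 30s timeout.
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		log.Printf("exited after SIGTERM: %v", err)
	case <-time.After(30 * time.Second):
		// Grace period expired: escalate, mirroring the runtime's SIGKILL.
		cmd.Process.Kill()
		log.Printf("killed after timeout: %v", <-done)
	}
}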
May 9 23:45:45.711890 containerd[1443]: time="2025-05-09T23:45:45.711845920Z" level=info msg="StopContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" with timeout 2 (s)" May 9 23:45:45.712185 containerd[1443]: time="2025-05-09T23:45:45.712164668Z" level=info msg="Stop container \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" with signal terminated" May 9 23:45:45.715980 containerd[1443]: time="2025-05-09T23:45:45.714428625Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:45:45.714922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba-rootfs.mount: Deactivated successfully. May 9 23:45:45.720270 systemd-networkd[1386]: lxc_health: Link DOWN May 9 23:45:45.720276 systemd-networkd[1386]: lxc_health: Lost carrier May 9 23:45:45.723538 containerd[1443]: time="2025-05-09T23:45:45.723293178Z" level=info msg="shim disconnected" id=7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba namespace=k8s.io May 9 23:45:45.723538 containerd[1443]: time="2025-05-09T23:45:45.723372815Z" level=warning msg="cleaning up after shim disconnected" id=7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba namespace=k8s.io May 9 23:45:45.723538 containerd[1443]: time="2025-05-09T23:45:45.723381374Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:45.745863 systemd[1]: cri-containerd-e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304.scope: Deactivated successfully. May 9 23:45:45.746245 systemd[1]: cri-containerd-e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304.scope: Consumed 6.796s CPU time. May 9 23:45:45.773523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304-rootfs.mount: Deactivated successfully. 
May 9 23:45:45.778436 containerd[1443]: time="2025-05-09T23:45:45.778374026Z" level=info msg="shim disconnected" id=e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304 namespace=k8s.io May 9 23:45:45.778436 containerd[1443]: time="2025-05-09T23:45:45.778429704Z" level=warning msg="cleaning up after shim disconnected" id=e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304 namespace=k8s.io May 9 23:45:45.778436 containerd[1443]: time="2025-05-09T23:45:45.778438104Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:45.789577 containerd[1443]: time="2025-05-09T23:45:45.789532774Z" level=info msg="StopContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" returns successfully" May 9 23:45:45.792296 containerd[1443]: time="2025-05-09T23:45:45.792256674Z" level=info msg="StopPodSandbox for \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\"" May 9 23:45:45.794198 containerd[1443]: time="2025-05-09T23:45:45.794165363Z" level=info msg="StopContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" returns successfully" May 9 23:45:45.794671 containerd[1443]: time="2025-05-09T23:45:45.794627426Z" level=info msg="StopPodSandbox for \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\"" May 9 23:45:45.795828 containerd[1443]: time="2025-05-09T23:45:45.795613030Z" level=info msg="Container to stop \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.795828 containerd[1443]: time="2025-05-09T23:45:45.795702987Z" level=info msg="Container to stop \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.795828 containerd[1443]: time="2025-05-09T23:45:45.795714586Z" level=info msg="Container to stop \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.795828 containerd[1443]: time="2025-05-09T23:45:45.795724066Z" level=info msg="Container to stop \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.795828 containerd[1443]: time="2025-05-09T23:45:45.795732986Z" level=info msg="Container to stop \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.797028 containerd[1443]: time="2025-05-09T23:45:45.796967220Z" level=info msg="Container to stop \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 23:45:45.797573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a-shm.mount: Deactivated successfully. May 9 23:45:45.799766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f-shm.mount: Deactivated successfully. May 9 23:45:45.802007 systemd[1]: cri-containerd-4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a.scope: Deactivated successfully. May 9 23:45:45.803587 systemd[1]: cri-containerd-ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f.scope: Deactivated successfully. 
May 9 23:45:45.825897 containerd[1443]: time="2025-05-09T23:45:45.825637403Z" level=info msg="shim disconnected" id=4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a namespace=k8s.io
May 9 23:45:45.826338 containerd[1443]: time="2025-05-09T23:45:45.826002429Z" level=warning msg="cleaning up after shim disconnected" id=4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a namespace=k8s.io
May 9 23:45:45.826338 containerd[1443]: time="2025-05-09T23:45:45.826021708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:45:45.828620 containerd[1443]: time="2025-05-09T23:45:45.826314298Z" level=info msg="shim disconnected" id=ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f namespace=k8s.io
May 9 23:45:45.828620 containerd[1443]: time="2025-05-09T23:45:45.828627492Z" level=warning msg="cleaning up after shim disconnected" id=ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f namespace=k8s.io
May 9 23:45:45.828793 containerd[1443]: time="2025-05-09T23:45:45.828645452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:45:45.842582 containerd[1443]: time="2025-05-09T23:45:45.842529380Z" level=info msg="TearDown network for sandbox \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\" successfully"
May 9 23:45:45.842582 containerd[1443]: time="2025-05-09T23:45:45.842569738Z" level=info msg="StopPodSandbox for \"ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f\" returns successfully"
May 9 23:45:45.867647 containerd[1443]: time="2025-05-09T23:45:45.867600575Z" level=info msg="TearDown network for sandbox \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" successfully"
May 9 23:45:45.867898 containerd[1443]: time="2025-05-09T23:45:45.867767049Z" level=info msg="StopPodSandbox for \"4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a\" returns successfully"
May 9 23:45:45.908529 kubelet[2531]: I0509 23:45:45.907935 2531 scope.go:117] "RemoveContainer" containerID="7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba"
May 9 23:45:45.910243 containerd[1443]: time="2025-05-09T23:45:45.910092368Z" level=info msg="RemoveContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\""
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910459 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-config-path\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910514 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e3a8070-e3d9-4aef-b100-988898fc96be-cilium-config-path\") pod \"4e3a8070-e3d9-4aef-b100-988898fc96be\" (UID: \"4e3a8070-e3d9-4aef-b100-988898fc96be\") "
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910533 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-hostproc\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910558 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852sq\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-kube-api-access-852sq\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910576 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a913bef2-8424-4e35-8911-c9845b5db6fd-clustermesh-secrets\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.910870 kubelet[2531]: I0509 23:45:45.910592 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-xtables-lock\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910607 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-net\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910621 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-cgroup\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910635 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-run\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910649 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-etc-cni-netd\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910667 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-hubble-tls\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911080 kubelet[2531]: I0509 23:45:45.910681 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cni-path\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911203 kubelet[2531]: I0509 23:45:45.910694 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-kernel\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911203 kubelet[2531]: I0509 23:45:45.910709 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-bpf-maps\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.911203 kubelet[2531]: I0509 23:45:45.910727 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jgck\" (UniqueName: \"kubernetes.io/projected/4e3a8070-e3d9-4aef-b100-988898fc96be-kube-api-access-4jgck\") pod \"4e3a8070-e3d9-4aef-b100-988898fc96be\" (UID: \"4e3a8070-e3d9-4aef-b100-988898fc96be\") "
May 9 23:45:45.911203 kubelet[2531]: I0509 23:45:45.910743 2531 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-lib-modules\") pod \"a913bef2-8424-4e35-8911-c9845b5db6fd\" (UID: \"a913bef2-8424-4e35-8911-c9845b5db6fd\") "
May 9 23:45:45.915546 kubelet[2531]: I0509 23:45:45.914607 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cni-path" (OuterVolumeSpecName: "cni-path") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.915546 kubelet[2531]: I0509 23:45:45.914662 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.915546 kubelet[2531]: I0509 23:45:45.914680 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.915546 kubelet[2531]: I0509 23:45:45.914607 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.915546 kubelet[2531]: I0509 23:45:45.914929 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.915723 containerd[1443]: time="2025-05-09T23:45:45.915411971Z" level=info msg="RemoveContainer for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" returns successfully"
May 9 23:45:45.915755 kubelet[2531]: I0509 23:45:45.914962 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.916992 kubelet[2531]: I0509 23:45:45.916396 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.916992 kubelet[2531]: I0509 23:45:45.916453 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.916992 kubelet[2531]: I0509 23:45:45.916487 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.917115 kubelet[2531]: I0509 23:45:45.917015 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3a8070-e3d9-4aef-b100-988898fc96be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e3a8070-e3d9-4aef-b100-988898fc96be" (UID: "4e3a8070-e3d9-4aef-b100-988898fc96be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 9 23:45:45.918828 kubelet[2531]: I0509 23:45:45.918792 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-hostproc" (OuterVolumeSpecName: "hostproc") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 9 23:45:45.919515 kubelet[2531]: I0509 23:45:45.918901 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 23:45:45.919586 kubelet[2531]: I0509 23:45:45.919508 2531 scope.go:117] "RemoveContainer" containerID="7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba"
May 9 23:45:45.919674 kubelet[2531]: I0509 23:45:45.919647 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-kube-api-access-852sq" (OuterVolumeSpecName: "kube-api-access-852sq") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "kube-api-access-852sq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 23:45:45.919757 containerd[1443]: time="2025-05-09T23:45:45.919703373Z" level=error msg="ContainerStatus for \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\": not found"
May 9 23:45:45.919796 kubelet[2531]: I0509 23:45:45.919747 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3a8070-e3d9-4aef-b100-988898fc96be-kube-api-access-4jgck" (OuterVolumeSpecName: "kube-api-access-4jgck") pod "4e3a8070-e3d9-4aef-b100-988898fc96be" (UID: "4e3a8070-e3d9-4aef-b100-988898fc96be"). InnerVolumeSpecName "kube-api-access-4jgck". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 9 23:45:45.919887 kubelet[2531]: I0509 23:45:45.919828 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a913bef2-8424-4e35-8911-c9845b5db6fd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 9 23:45:45.921112 kubelet[2531]: I0509 23:45:45.921082 2531 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a913bef2-8424-4e35-8911-c9845b5db6fd" (UID: "a913bef2-8424-4e35-8911-c9845b5db6fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 9 23:45:45.925833 kubelet[2531]: E0509 23:45:45.925751 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\": not found" containerID="7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba"
May 9 23:45:45.926075 kubelet[2531]: I0509 23:45:45.925984 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba"} err="failed to get container status \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"7384c63ff20dfe528174280ae8a08c2d5c4b21c04bd1f18d986d763e9ab060ba\": not found"
May 9 23:45:45.926557 kubelet[2531]: I0509 23:45:45.926172 2531 scope.go:117] "RemoveContainer" containerID="e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304"
May 9 23:45:45.929723 containerd[1443]: time="2025-05-09T23:45:45.929685645Z" level=info msg="RemoveContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\""
May 9 23:45:45.932266 containerd[1443]: time="2025-05-09T23:45:45.932214672Z" level=info msg="RemoveContainer for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" returns successfully"
May 9 23:45:45.932830 kubelet[2531]: I0509 23:45:45.932732 2531 scope.go:117] "RemoveContainer" containerID="536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac"
May 9 23:45:45.934203 containerd[1443]: time="2025-05-09T23:45:45.934181079Z" level=info msg="RemoveContainer for \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\""
May 9 23:45:45.935379 systemd[1]: Removed slice kubepods-burstable-poda913bef2_8424_4e35_8911_c9845b5db6fd.slice - libcontainer container kubepods-burstable-poda913bef2_8424_4e35_8911_c9845b5db6fd.slice.
May 9 23:45:45.935587 systemd[1]: kubepods-burstable-poda913bef2_8424_4e35_8911_c9845b5db6fd.slice: Consumed 6.937s CPU time.
May 9 23:45:45.936889 containerd[1443]: time="2025-05-09T23:45:45.936790463Z" level=info msg="RemoveContainer for \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\" returns successfully"
May 9 23:45:45.937046 kubelet[2531]: I0509 23:45:45.937014 2531 scope.go:117] "RemoveContainer" containerID="3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b"
May 9 23:45:45.939638 containerd[1443]: time="2025-05-09T23:45:45.939405966Z" level=info msg="RemoveContainer for \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\""
May 9 23:45:45.941797 containerd[1443]: time="2025-05-09T23:45:45.941706521Z" level=info msg="RemoveContainer for \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\" returns successfully"
May 9 23:45:45.941917 kubelet[2531]: I0509 23:45:45.941882 2531 scope.go:117] "RemoveContainer" containerID="e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9"
May 9 23:45:45.943360 containerd[1443]: time="2025-05-09T23:45:45.943240425Z" level=info msg="RemoveContainer for \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\""
May 9 23:45:45.945925 containerd[1443]: time="2025-05-09T23:45:45.945880567Z" level=info msg="RemoveContainer for \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\" returns successfully"
May 9 23:45:45.946219 kubelet[2531]: I0509 23:45:45.946192 2531 scope.go:117] "RemoveContainer" containerID="a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256"
May 9 23:45:45.947124 containerd[1443]: time="2025-05-09T23:45:45.947102882Z" level=info msg="RemoveContainer for \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\""
May 9 23:45:45.949218 containerd[1443]: time="2025-05-09T23:45:45.949181886Z" level=info msg="RemoveContainer for \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\" returns successfully"
May 9 23:45:45.949408 kubelet[2531]: I0509 23:45:45.949382 2531 scope.go:117] "RemoveContainer" containerID="e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304"
May 9 23:45:45.949868 containerd[1443]: time="2025-05-09T23:45:45.949760904Z" level=error msg="ContainerStatus for \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\": not found"
May 9 23:45:45.949961 kubelet[2531]: E0509 23:45:45.949893 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\": not found" containerID="e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304"
May 9 23:45:45.949961 kubelet[2531]: I0509 23:45:45.949925 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304"} err="failed to get container status \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9991437df548cb5cf5164c3891b3906e7086bb09a43ce23baab149fc06d8304\": not found"
May 9 23:45:45.949961 kubelet[2531]: I0509 23:45:45.949947 2531 scope.go:117] "RemoveContainer" containerID="536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac"
May 9 23:45:45.950149 containerd[1443]: time="2025-05-09T23:45:45.950097012Z" level=error msg="ContainerStatus for \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\": not found"
May 9 23:45:45.950261 kubelet[2531]: E0509 23:45:45.950237 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\": not found" containerID="536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac"
May 9 23:45:45.950296 kubelet[2531]: I0509 23:45:45.950269 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac"} err="failed to get container status \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"536607a02a456fb3e79b1054834f8ddb383f275bf2abf3e7752827d249a597ac\": not found"
May 9 23:45:45.950296 kubelet[2531]: I0509 23:45:45.950289 2531 scope.go:117] "RemoveContainer" containerID="3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b"
May 9 23:45:45.950507 containerd[1443]: time="2025-05-09T23:45:45.950432560Z" level=error msg="ContainerStatus for \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\": not found"
May 9 23:45:45.950654 kubelet[2531]: E0509 23:45:45.950635 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\": not found" containerID="3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b"
May 9 23:45:45.950705 kubelet[2531]: I0509 23:45:45.950658 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b"} err="failed to get container status \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e73a0631b520cccb832f93bd5c632c9de66ab8f6ab1003131e4324954933f8b\": not found"
May 9 23:45:45.950705 kubelet[2531]: I0509 23:45:45.950672 2531 scope.go:117] "RemoveContainer" containerID="e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9"
May 9 23:45:45.950834 containerd[1443]: time="2025-05-09T23:45:45.950807946Z" level=error msg="ContainerStatus for \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\": not found"
May 9 23:45:45.951048 kubelet[2531]: E0509 23:45:45.950928 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\": not found" containerID="e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9"
May 9 23:45:45.951048 kubelet[2531]: I0509 23:45:45.950954 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9"} err="failed to get container status \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e012286d3494b2606f89ff824caa153cb9d05cbc76afcb29885fe9b905be50c9\": not found"
May 9 23:45:45.951048 kubelet[2531]: I0509 23:45:45.950970 2531 scope.go:117] "RemoveContainer" containerID="a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256"
May 9 23:45:45.951153 containerd[1443]: time="2025-05-09T23:45:45.951105055Z" level=error msg="ContainerStatus for \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\": not found"
May 9 23:45:45.951222 kubelet[2531]: E0509 23:45:45.951198 2531 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\": not found" containerID="a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256"
May 9 23:45:45.951267 kubelet[2531]: I0509 23:45:45.951221 2531 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256"} err="failed to get container status \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3097bccbe5b872c516600f63706045caa7826800e6f15d810873d552ba91256\": not found"
May 9 23:45:46.011153 kubelet[2531]: I0509 23:45:46.011112 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011153 kubelet[2531]: I0509 23:45:46.011143 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-run\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011153 kubelet[2531]: I0509 23:45:46.011154 2531 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011153 kubelet[2531]: I0509 23:45:46.011164 2531 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011153 kubelet[2531]: I0509 23:45:46.011172 2531 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011179 2531 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-cni-path\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011187 2531 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011195 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jgck\" (UniqueName: \"kubernetes.io/projected/4e3a8070-e3d9-4aef-b100-988898fc96be-kube-api-access-4jgck\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011204 2531 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011211 2531 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-lib-modules\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011218 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e3a8070-e3d9-4aef-b100-988898fc96be-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011225 2531 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-hostproc\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011385 kubelet[2531]: I0509 23:45:46.011244 2531 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a913bef2-8424-4e35-8911-c9845b5db6fd-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011583 kubelet[2531]: I0509 23:45:46.011252 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-852sq\" (UniqueName: \"kubernetes.io/projected/a913bef2-8424-4e35-8911-c9845b5db6fd-kube-api-access-852sq\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011583 kubelet[2531]: I0509 23:45:46.011260 2531 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a913bef2-8424-4e35-8911-c9845b5db6fd-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.011583 kubelet[2531]: I0509 23:45:46.011268 2531 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a913bef2-8424-4e35-8911-c9845b5db6fd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 9 23:45:46.213977 systemd[1]: Removed slice kubepods-besteffort-pod4e3a8070_e3d9_4aef_b100_988898fc96be.slice - libcontainer container kubepods-besteffort-pod4e3a8070_e3d9_4aef_b100_988898fc96be.slice.
May 9 23:45:46.692400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae595ab31efae46ef9bbb9361ab6e64ceb19ff8f43e8a81dceaa990b84caff1f-rootfs.mount: Deactivated successfully.
May 9 23:45:46.692523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4943b12e582fcebea8173bf6b2acb19951c19653dcb014c6e4e8a506f6ebf56a-rootfs.mount: Deactivated successfully.
May 9 23:45:46.692578 systemd[1]: var-lib-kubelet-pods-4e3a8070\x2de3d9\x2d4aef\x2db100\x2d988898fc96be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jgck.mount: Deactivated successfully.
May 9 23:45:46.692632 systemd[1]: var-lib-kubelet-pods-a913bef2\x2d8424\x2d4e35\x2d8911\x2dc9845b5db6fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d852sq.mount: Deactivated successfully. May 9 23:45:46.692695 systemd[1]: var-lib-kubelet-pods-a913bef2\x2d8424\x2d4e35\x2d8911\x2dc9845b5db6fd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 23:45:46.692742 systemd[1]: var-lib-kubelet-pods-a913bef2\x2d8424\x2d4e35\x2d8911\x2dc9845b5db6fd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 23:45:46.696610 kubelet[2531]: I0509 23:45:46.695798 2531 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3a8070-e3d9-4aef-b100-988898fc96be" path="/var/lib/kubelet/pods/4e3a8070-e3d9-4aef-b100-988898fc96be/volumes" May 9 23:45:46.696610 kubelet[2531]: I0509 23:45:46.696159 2531 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a913bef2-8424-4e35-8911-c9845b5db6fd" path="/var/lib/kubelet/pods/a913bef2-8424-4e35-8911-c9845b5db6fd/volumes" May 9 23:45:47.630376 sshd[4193]: Connection closed by 10.0.0.1 port 47396 May 9 23:45:47.632142 sshd-session[4191]: pam_unix(sshd:session): session closed for user core May 9 23:45:47.645135 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:47396.service: Deactivated successfully. May 9 23:45:47.646866 systemd[1]: session-23.scope: Deactivated successfully. May 9 23:45:47.647038 systemd[1]: session-23.scope: Consumed 1.026s CPU time. May 9 23:45:47.648213 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. May 9 23:45:47.652899 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:47402.service - OpenSSH per-connection server daemon (10.0.0.1:47402). May 9 23:45:47.654303 systemd-logind[1424]: Removed session 23. May 9 23:45:47.698503 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 47402 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:47.699969 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:47.704238 systemd-logind[1424]: New session 24 of user core. May 9 23:45:47.719702 systemd[1]: Started session-24.scope - Session 24 of User core. May 9 23:45:47.740574 kubelet[2531]: E0509 23:45:47.740525 2531 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:45:48.322443 sshd[4355]: Connection closed by 10.0.0.1 port 47402 May 9 23:45:48.322873 sshd-session[4353]: pam_unix(sshd:session): session closed for user core May 9 23:45:48.334047 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:47402.service: Deactivated successfully. May 9 23:45:48.336446 systemd[1]: session-24.scope: Deactivated successfully. May 9 23:45:48.340249 systemd-logind[1424]: Session 24 logged out. Waiting for processes to exit. May 9 23:45:48.349090 kubelet[2531]: I0509 23:45:48.349021 2531 memory_manager.go:355] "RemoveStaleState removing state" podUID="4e3a8070-e3d9-4aef-b100-988898fc96be" containerName="cilium-operator" May 9 23:45:48.349090 kubelet[2531]: I0509 23:45:48.349053 2531 memory_manager.go:355] "RemoveStaleState removing state" podUID="a913bef2-8424-4e35-8911-c9845b5db6fd" containerName="cilium-agent" May 9 23:45:48.350161 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:47418.service - OpenSSH per-connection server daemon (10.0.0.1:47418). May 9 23:45:48.354025 systemd-logind[1424]: Removed session 24. 
May 9 23:45:48.362850 systemd[1]: Created slice kubepods-burstable-pod568ef72f_a561_4b99_b333_e634cf0beb4f.slice - libcontainer container kubepods-burstable-pod568ef72f_a561_4b99_b333_e634cf0beb4f.slice.
May 9 23:45:48.408142 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 47418 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 9 23:45:48.409517 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:45:48.413567 systemd-logind[1424]: New session 25 of user core.
May 9 23:45:48.421011 kubelet[2531]: I0509 23:45:48.420975 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-lib-modules\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421098 kubelet[2531]: I0509 23:45:48.421016 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/568ef72f-a561-4b99-b333-e634cf0beb4f-clustermesh-secrets\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421098 kubelet[2531]: I0509 23:45:48.421039 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-cilium-run\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421098 kubelet[2531]: I0509 23:45:48.421055 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/568ef72f-a561-4b99-b333-e634cf0beb4f-cilium-ipsec-secrets\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421098 kubelet[2531]: I0509 23:45:48.421075 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-host-proc-sys-kernel\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421098 kubelet[2531]: I0509 23:45:48.421091 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-cni-path\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421106 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-xtables-lock\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421122 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-host-proc-sys-net\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421161 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-hostproc\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421180 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/568ef72f-a561-4b99-b333-e634cf0beb4f-hubble-tls\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421198 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98rd\" (UniqueName: \"kubernetes.io/projected/568ef72f-a561-4b99-b333-e634cf0beb4f-kube-api-access-f98rd\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421226 kubelet[2531]: I0509 23:45:48.421217 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/568ef72f-a561-4b99-b333-e634cf0beb4f-cilium-config-path\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421350 kubelet[2531]: I0509 23:45:48.421235 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-bpf-maps\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421350 kubelet[2531]: I0509 23:45:48.421249 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-cilium-cgroup\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.421350 kubelet[2531]: I0509 23:45:48.421266 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/568ef72f-a561-4b99-b333-e634cf0beb4f-etc-cni-netd\") pod \"cilium-hlv7l\" (UID: \"568ef72f-a561-4b99-b333-e634cf0beb4f\") " pod="kube-system/cilium-hlv7l"
May 9 23:45:48.423659 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 23:45:48.473580 sshd[4368]: Connection closed by 10.0.0.1 port 47418
May 9 23:45:48.474156 sshd-session[4366]: pam_unix(sshd:session): session closed for user core
May 9 23:45:48.490107 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:47418.service: Deactivated successfully.
May 9 23:45:48.492852 systemd[1]: session-25.scope: Deactivated successfully.
May 9 23:45:48.494620 systemd-logind[1424]: Session 25 logged out. Waiting for processes to exit.
May 9 23:45:48.502714 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:47420.service - OpenSSH per-connection server daemon (10.0.0.1:47420).
May 9 23:45:48.504193 systemd-logind[1424]: Removed session 25.
May 9 23:45:48.551315 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 47420 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 9 23:45:48.552778 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:45:48.556244 systemd-logind[1424]: New session 26 of user core. May 9 23:45:48.569667 systemd[1]: Started session-26.scope - Session 26 of User core. May 9 23:45:48.666174 kubelet[2531]: E0509 23:45:48.665758 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:48.666889 containerd[1443]: time="2025-05-09T23:45:48.666603942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlv7l,Uid:568ef72f-a561-4b99-b333-e634cf0beb4f,Namespace:kube-system,Attempt:0,}" May 9 23:45:48.686947 containerd[1443]: time="2025-05-09T23:45:48.686851343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:45:48.686947 containerd[1443]: time="2025-05-09T23:45:48.686904381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:45:48.686947 containerd[1443]: time="2025-05-09T23:45:48.686915581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:48.687242 containerd[1443]: time="2025-05-09T23:45:48.686982979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:45:48.708690 systemd[1]: Started cri-containerd-60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca.scope - libcontainer container 60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca. May 9 23:45:48.730783 containerd[1443]: time="2025-05-09T23:45:48.730732912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlv7l,Uid:568ef72f-a561-4b99-b333-e634cf0beb4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\"" May 9 23:45:48.731639 kubelet[2531]: E0509 23:45:48.731415 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:48.733414 containerd[1443]: time="2025-05-09T23:45:48.733382743Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:45:48.742635 containerd[1443]: time="2025-05-09T23:45:48.742585034Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882\"" May 9 23:45:48.743069 containerd[1443]: time="2025-05-09T23:45:48.743042339Z" level=info msg="StartContainer for \"8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882\"" May 9 23:45:48.768676 systemd[1]: Started cri-containerd-8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882.scope - libcontainer container 8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882. 
May 9 23:45:48.800495 containerd[1443]: time="2025-05-09T23:45:48.800435894Z" level=info msg="StartContainer for \"8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882\" returns successfully" May 9 23:45:48.815638 systemd[1]: cri-containerd-8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882.scope: Deactivated successfully. May 9 23:45:48.846723 containerd[1443]: time="2025-05-09T23:45:48.846637105Z" level=info msg="shim disconnected" id=8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882 namespace=k8s.io May 9 23:45:48.846723 containerd[1443]: time="2025-05-09T23:45:48.846703583Z" level=warning msg="cleaning up after shim disconnected" id=8cec74adbe2ca4885852979d531d78dc067db6ebcdbecf440e12980cc6a8b882 namespace=k8s.io May 9 23:45:48.846723 containerd[1443]: time="2025-05-09T23:45:48.846712382Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:48.934708 kubelet[2531]: E0509 23:45:48.934525 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:48.937824 containerd[1443]: time="2025-05-09T23:45:48.937780288Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:45:48.954031 containerd[1443]: time="2025-05-09T23:45:48.953957466Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0\"" May 9 23:45:48.954543 containerd[1443]: time="2025-05-09T23:45:48.954514047Z" level=info msg="StartContainer for \"11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0\"" May 9 23:45:48.976716 systemd[1]: Started cri-containerd-11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0.scope - libcontainer container 11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0. May 9 23:45:48.996438 containerd[1443]: time="2025-05-09T23:45:48.996368604Z" level=info msg="StartContainer for \"11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0\" returns successfully" May 9 23:45:49.003435 systemd[1]: cri-containerd-11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0.scope: Deactivated successfully. 
May 9 23:45:49.031519 containerd[1443]: time="2025-05-09T23:45:49.031395661Z" level=info msg="shim disconnected" id=11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0 namespace=k8s.io May 9 23:45:49.031519 containerd[1443]: time="2025-05-09T23:45:49.031460298Z" level=warning msg="cleaning up after shim disconnected" id=11af554239088df8693d093873080254e032dad53c805a152573aca44ebf9aa0 namespace=k8s.io May 9 23:45:49.031717 containerd[1443]: time="2025-05-09T23:45:49.031470498Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:49.938700 kubelet[2531]: E0509 23:45:49.938672 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:49.941344 containerd[1443]: time="2025-05-09T23:45:49.941304261Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:45:49.973634 containerd[1443]: time="2025-05-09T23:45:49.973577453Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6\"" May 9 23:45:49.974331 containerd[1443]: time="2025-05-09T23:45:49.974295550Z" level=info msg="StartContainer for \"818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6\"" May 9 23:45:50.001670 systemd[1]: Started cri-containerd-818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6.scope - libcontainer container 818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6. May 9 23:45:50.037753 containerd[1443]: time="2025-05-09T23:45:50.037623209Z" level=info msg="StartContainer for \"818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6\" returns successfully" May 9 23:45:50.040221 systemd[1]: cri-containerd-818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6.scope: Deactivated successfully. May 9 23:45:50.063221 containerd[1443]: time="2025-05-09T23:45:50.063148246Z" level=info msg="shim disconnected" id=818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6 namespace=k8s.io May 9 23:45:50.063221 containerd[1443]: time="2025-05-09T23:45:50.063210004Z" level=warning msg="cleaning up after shim disconnected" id=818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6 namespace=k8s.io May 9 23:45:50.063221 containerd[1443]: time="2025-05-09T23:45:50.063220284Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:50.526175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-818fea0908e5301e88b7b357777d8296f69d3a07b109883971d97a8e9b6a97b6-rootfs.mount: Deactivated successfully. 
May 9 23:45:50.944147 kubelet[2531]: E0509 23:45:50.942717 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:45:50.944554 containerd[1443]: time="2025-05-09T23:45:50.944517909Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:45:50.968147 containerd[1443]: time="2025-05-09T23:45:50.968086807Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc\"" May 9 23:45:50.968915 containerd[1443]: time="2025-05-09T23:45:50.968887062Z" level=info msg="StartContainer for \"f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc\"" May 9 23:45:50.999650 systemd[1]: Started cri-containerd-f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc.scope - libcontainer container f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc. May 9 23:45:51.023707 systemd[1]: cri-containerd-f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc.scope: Deactivated successfully. May 9 23:45:51.025997 containerd[1443]: time="2025-05-09T23:45:51.025850093Z" level=info msg="StartContainer for \"f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc\" returns successfully" May 9 23:45:51.048123 containerd[1443]: time="2025-05-09T23:45:51.047901821Z" level=info msg="shim disconnected" id=f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc namespace=k8s.io May 9 23:45:51.048123 containerd[1443]: time="2025-05-09T23:45:51.047963019Z" level=warning msg="cleaning up after shim disconnected" id=f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc namespace=k8s.io May 9 23:45:51.048123 containerd[1443]: time="2025-05-09T23:45:51.047971139Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:45:51.526285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4615fc04aada7f158b63375d682d993258aa0e70d7b6eb065bc09c7043e44fc-rootfs.mount: Deactivated successfully. 
May 9 23:45:51.947715 kubelet[2531]: E0509 23:45:51.947292 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:51.949365 containerd[1443]: time="2025-05-09T23:45:51.949326939Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 23:45:51.982239 containerd[1443]: time="2025-05-09T23:45:51.982167218Z" level=info msg="CreateContainer within sandbox \"60d666ea5b3dc5c19a217cfd5309a099a77d2cb8ba571c6ada88802aa058c9ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7bce9f95f31b04d7eb26a039b3df84c487cb5d3f20beaf1155c26837c6980b03\""
May 9 23:45:51.982970 containerd[1443]: time="2025-05-09T23:45:51.982900155Z" level=info msg="StartContainer for \"7bce9f95f31b04d7eb26a039b3df84c487cb5d3f20beaf1155c26837c6980b03\""
May 9 23:45:52.014698 systemd[1]: Started cri-containerd-7bce9f95f31b04d7eb26a039b3df84c487cb5d3f20beaf1155c26837c6980b03.scope - libcontainer container 7bce9f95f31b04d7eb26a039b3df84c487cb5d3f20beaf1155c26837c6980b03.
May 9 23:45:52.046208 containerd[1443]: time="2025-05-09T23:45:52.046165149Z" level=info msg="StartContainer for \"7bce9f95f31b04d7eb26a039b3df84c487cb5d3f20beaf1155c26837c6980b03\" returns successfully"
May 9 23:45:52.308519 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 9 23:45:52.951489 kubelet[2531]: E0509 23:45:52.951433 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:53.008260 kubelet[2531]: I0509 23:45:53.008200 2531 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hlv7l" podStartSLOduration=5.008184142 podStartE2EDuration="5.008184142s" podCreationTimestamp="2025-05-09 23:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:45:53.007757754 +0000 UTC m=+80.406003436" watchObservedRunningTime="2025-05-09 23:45:53.008184142 +0000 UTC m=+80.406429824"
May 9 23:45:54.667352 kubelet[2531]: E0509 23:45:54.667305 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:55.115405 systemd-networkd[1386]: lxc_health: Link UP
May 9 23:45:55.127967 systemd-networkd[1386]: lxc_health: Gained carrier
May 9 23:45:56.204904 systemd-networkd[1386]: lxc_health: Gained IPv6LL
May 9 23:45:56.668059 kubelet[2531]: E0509 23:45:56.668014 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:56.965464 kubelet[2531]: E0509 23:45:56.965357 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:57.693773 kubelet[2531]: E0509 23:45:57.693388 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:45:57.967640 kubelet[2531]: E0509 23:45:57.967413 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
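[Editor's note] lxc_health is the veth interface Cilium creates for its node health endpoint; systemd-networkd merely reports the link coming up, gaining carrier, and acquiring an IPv6 link-local address. The sketch below creates and raises a comparable veth pair with the vishvananda/netlink package. This is not Cilium's own code: the peer name is a hypothetical placeholder, and the program must run as root on Linux.

// Illustrative veth setup in the style of a CNI health endpoint.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "lxc_health"},
		PeerName:  "lxc_health_peer", // hypothetical peer name
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatalf("creating veth pair: %v", err)
	}
	// LinkSetUp is what produces the "Link UP" / "Gained carrier" events
	// systemd-networkd records once both ends of the pair are up.
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatalf("bringing lxc_health up: %v", err)
	}
	log.Println("lxc_health is up; the kernel will now assign an IPv6 link-local address")
}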
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:46:03.522840 sshd[4380]: Connection closed by 10.0.0.1 port 47420 May 9 23:46:03.523728 sshd-session[4374]: pam_unix(sshd:session): session closed for user core May 9 23:46:03.526442 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:47420.service: Deactivated successfully. May 9 23:46:03.528271 systemd[1]: session-26.scope: Deactivated successfully. May 9 23:46:03.529696 systemd-logind[1424]: Session 26 logged out. Waiting for processes to exit. May 9 23:46:03.530760 systemd-logind[1424]: Removed session 26.