May 9 23:28:53.906365 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 9 23:28:53.906388 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 9 22:04:47 -00 2025
May 9 23:28:53.906397 kernel: KASLR enabled
May 9 23:28:53.906403 kernel: efi: EFI v2.7 by EDK II
May 9 23:28:53.906409 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 9 23:28:53.906414 kernel: random: crng init done
May 9 23:28:53.906421 kernel: secureboot: Secure boot disabled
May 9 23:28:53.906427 kernel: ACPI: Early table checksum verification disabled
May 9 23:28:53.906433 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 9 23:28:53.906440 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 9 23:28:53.906446 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906452 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906458 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906464 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906471 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906479 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906485 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906491 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906497 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 23:28:53.906503 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 9 23:28:53.906509 kernel: NUMA: Failed to initialise from firmware
May 9 23:28:53.906516 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:28:53.906522 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 9 23:28:53.906528 kernel: Zone ranges:
May 9 23:28:53.906534 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:28:53.906542 kernel: DMA32 empty
May 9 23:28:53.906548 kernel: Normal empty
May 9 23:28:53.906554 kernel: Movable zone start for each node
May 9 23:28:53.906560 kernel: Early memory node ranges
May 9 23:28:53.906566 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 9 23:28:53.906573 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 9 23:28:53.906579 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 9 23:28:53.906585 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 9 23:28:53.906593 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 9 23:28:53.906603 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 9 23:28:53.906611 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 9 23:28:53.906617 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 9 23:28:53.906625 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 9 23:28:53.906631 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 9 23:28:53.906638 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 9 23:28:53.906647 kernel: psci: probing for conduit method from ACPI.
May 9 23:28:53.906654 kernel: psci: PSCIv1.1 detected in firmware.
May 9 23:28:53.906660 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:28:53.906668 kernel: psci: Trusted OS migration not required
May 9 23:28:53.906680 kernel: psci: SMC Calling Convention v1.1
May 9 23:28:53.906688 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 9 23:28:53.906694 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:28:53.906701 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:28:53.906707 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 9 23:28:53.906714 kernel: Detected PIPT I-cache on CPU0
May 9 23:28:53.906720 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:28:53.906727 kernel: CPU features: detected: Hardware dirty bit management
May 9 23:28:53.906733 kernel: CPU features: detected: Spectre-v4
May 9 23:28:53.906741 kernel: CPU features: detected: Spectre-BHB
May 9 23:28:53.906748 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 9 23:28:53.906755 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 9 23:28:53.906761 kernel: CPU features: detected: ARM erratum 1418040
May 9 23:28:53.906768 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 9 23:28:53.906774 kernel: alternatives: applying boot alternatives
May 9 23:28:53.906782 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7c902da1543bd328e5d473315e9f14002c4b98c30eacfaf0678d5ea87545bd30
May 9 23:28:53.906788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:28:53.906795 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:28:53.906801 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:28:53.906808 kernel: Fallback order for Node 0: 0
May 9 23:28:53.906816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 9 23:28:53.906822 kernel: Policy zone: DMA
May 9 23:28:53.906829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:28:53.906835 kernel: software IO TLB: area num 4.
May 9 23:28:53.906842 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 9 23:28:53.906849 kernel: Memory: 2387348K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184940K reserved, 0K cma-reserved)
May 9 23:28:53.906855 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 23:28:53.906870 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:28:53.906877 kernel: rcu: RCU event tracing is enabled.
May 9 23:28:53.906884 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 23:28:53.906890 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:28:53.906897 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:28:53.906906 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:28:53.906912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 23:28:53.906919 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:28:53.906925 kernel: GICv3: 256 SPIs implemented
May 9 23:28:53.906932 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:28:53.906938 kernel: Root IRQ handler: gic_handle_irq
May 9 23:28:53.906945 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 9 23:28:53.906951 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 9 23:28:53.906958 kernel: ITS [mem 0x08080000-0x0809ffff]
May 9 23:28:53.906965 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:28:53.906971 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:28:53.906979 kernel: GICv3: using LPI property table @0x00000000400f0000
May 9 23:28:53.906986 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 9 23:28:53.906993 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:28:53.906999 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:28:53.907006 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 9 23:28:53.907012 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 9 23:28:53.907019 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 9 23:28:53.907026 kernel: arm-pv: using stolen time PV
May 9 23:28:53.907032 kernel: Console: colour dummy device 80x25
May 9 23:28:53.907039 kernel: ACPI: Core revision 20230628
May 9 23:28:53.907046 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 9 23:28:53.907054 kernel: pid_max: default: 32768 minimum: 301
May 9 23:28:53.907061 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:28:53.907068 kernel: landlock: Up and running.
May 9 23:28:53.907074 kernel: SELinux: Initializing.
May 9 23:28:53.907081 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:28:53.907088 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:28:53.907095 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 9 23:28:53.907102 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:28:53.907109 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 23:28:53.907117 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:28:53.907124 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:28:53.907130 kernel: Platform MSI: ITS@0x8080000 domain created
May 9 23:28:53.907137 kernel: PCI/MSI: ITS@0x8080000 domain created
May 9 23:28:53.907144 kernel: Remapping and enabling EFI services.
May 9 23:28:53.907150 kernel: smp: Bringing up secondary CPUs ...
May 9 23:28:53.907157 kernel: Detected PIPT I-cache on CPU1
May 9 23:28:53.907163 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 9 23:28:53.907170 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 9 23:28:53.907178 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:28:53.907185 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 9 23:28:53.907197 kernel: Detected PIPT I-cache on CPU2
May 9 23:28:53.907205 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 9 23:28:53.907212 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 9 23:28:53.907219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:28:53.907226 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 9 23:28:53.907233 kernel: Detected PIPT I-cache on CPU3
May 9 23:28:53.907240 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 9 23:28:53.907247 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 9 23:28:53.907256 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 9 23:28:53.907263 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 9 23:28:53.907270 kernel: smp: Brought up 1 node, 4 CPUs
May 9 23:28:53.907277 kernel: SMP: Total of 4 processors activated.
May 9 23:28:53.907284 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:28:53.907291 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 9 23:28:53.907298 kernel: CPU features: detected: Common not Private translations
May 9 23:28:53.907306 kernel: CPU features: detected: CRC32 instructions
May 9 23:28:53.907314 kernel: CPU features: detected: Enhanced Virtualization Traps
May 9 23:28:53.907321 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 9 23:28:53.907331 kernel: CPU features: detected: LSE atomic instructions
May 9 23:28:53.907338 kernel: CPU features: detected: Privileged Access Never
May 9 23:28:53.907350 kernel: CPU features: detected: RAS Extension Support
May 9 23:28:53.907359 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 9 23:28:53.907366 kernel: CPU: All CPU(s) started at EL1
May 9 23:28:53.907373 kernel: alternatives: applying system-wide alternatives
May 9 23:28:53.907381 kernel: devtmpfs: initialized
May 9 23:28:53.907389 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:28:53.907396 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 23:28:53.907403 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:28:53.907410 kernel: SMBIOS 3.0.0 present.
May 9 23:28:53.907417 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 9 23:28:53.907424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:28:53.907431 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:28:53.907438 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:28:53.907447 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:28:53.907454 kernel: audit: initializing netlink subsys (disabled)
May 9 23:28:53.907461 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 9 23:28:53.907467 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:28:53.907474 kernel: cpuidle: using governor menu
May 9 23:28:53.907481 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:28:53.907488 kernel: ASID allocator initialised with 32768 entries
May 9 23:28:53.907495 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:28:53.907502 kernel: Serial: AMBA PL011 UART driver
May 9 23:28:53.907510 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 9 23:28:53.907517 kernel: Modules: 0 pages in range for non-PLT usage
May 9 23:28:53.907524 kernel: Modules: 509232 pages in range for PLT usage
May 9 23:28:53.907530 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:28:53.907537 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:28:53.907544 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:28:53.907551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:28:53.907558 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:28:53.907565 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:28:53.907573 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:28:53.907580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:28:53.907587 kernel: ACPI: Added _OSI(Module Device)
May 9 23:28:53.907594 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:28:53.907601 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:28:53.907608 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:28:53.907615 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:28:53.907621 kernel: ACPI: Interpreter enabled
May 9 23:28:53.907628 kernel: ACPI: Using GIC for interrupt routing
May 9 23:28:53.907635 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:28:53.907643 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 9 23:28:53.907650 kernel: printk: console [ttyAMA0] enabled
May 9 23:28:53.907657 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 23:28:53.907786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:28:53.907867 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:28:53.907940 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:28:53.908006 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 9 23:28:53.908073 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 9 23:28:53.908082 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 9 23:28:53.908089 kernel: PCI host bridge to bus 0000:00
May 9 23:28:53.908159 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 9 23:28:53.908221 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:28:53.908278 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 9 23:28:53.908342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 23:28:53.908441 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 9 23:28:53.908515 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 9 23:28:53.908582 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 9 23:28:53.908646 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 9 23:28:53.908718 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:28:53.908797 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 9 23:28:53.908875 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 9 23:28:53.908949 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 9 23:28:53.909026 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 9 23:28:53.909087 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:28:53.909145 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 9 23:28:53.909155 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:28:53.909162 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:28:53.909169 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:28:53.909178 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:28:53.909185 kernel: iommu: Default domain type: Translated
May 9 23:28:53.909192 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:28:53.909199 kernel: efivars: Registered efivars operations
May 9 23:28:53.909205 kernel: vgaarb: loaded
May 9 23:28:53.909212 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:28:53.909219 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:28:53.909226 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:28:53.909232 kernel: pnp: PnP ACPI init
May 9 23:28:53.909304 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 9 23:28:53.909314 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:28:53.909321 kernel: NET: Registered PF_INET protocol family
May 9 23:28:53.909328 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:28:53.909335 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:28:53.909341 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:28:53.909356 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:28:53.909363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:28:53.909373 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:28:53.909380 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:28:53.909386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:28:53.909393 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:28:53.909400 kernel: PCI: CLS 0 bytes, default 64
May 9 23:28:53.909407 kernel: kvm [1]: HYP mode not available
May 9 23:28:53.909414 kernel: Initialise system trusted keyrings
May 9 23:28:53.909420 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:28:53.909427 kernel: Key type asymmetric registered
May 9 23:28:53.909435 kernel: Asymmetric key parser 'x509' registered
May 9 23:28:53.909442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:28:53.909449 kernel: io scheduler mq-deadline registered
May 9 23:28:53.909456 kernel: io scheduler kyber registered
May 9 23:28:53.909463 kernel: io scheduler bfq registered
May 9 23:28:53.909469 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:28:53.909476 kernel: ACPI: button: Power Button [PWRB]
May 9 23:28:53.909483 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:28:53.909551 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 9 23:28:53.909561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:28:53.909570 kernel: thunder_xcv, ver 1.0
May 9 23:28:53.909576 kernel: thunder_bgx, ver 1.0
May 9 23:28:53.909583 kernel: nicpf, ver 1.0
May 9 23:28:53.909590 kernel: nicvf, ver 1.0
May 9 23:28:53.909663 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:28:53.909730 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:28:53 UTC (1746833333)
May 9 23:28:53.909740 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:28:53.909747 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 9 23:28:53.909756 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:28:53.909763 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:28:53.909769 kernel: NET: Registered PF_INET6 protocol family
May 9 23:28:53.909776 kernel: Segment Routing with IPv6
May 9 23:28:53.909783 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:28:53.909790 kernel: NET: Registered PF_PACKET protocol family
May 9 23:28:53.909797 kernel: Key type dns_resolver registered
May 9 23:28:53.909803 kernel: registered taskstats version 1
May 9 23:28:53.909810 kernel: Loading compiled-in X.509 certificates
May 9 23:28:53.909818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 7cea787079d886e0b02698e14259d03ca649108a'
May 9 23:28:53.909825 kernel: Key type .fscrypt registered
May 9 23:28:53.909832 kernel: Key type fscrypt-provisioning registered
May 9 23:28:53.909840 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:28:53.909847 kernel: ima: Allocated hash algorithm: sha1
May 9 23:28:53.909853 kernel: ima: No architecture policies found
May 9 23:28:53.909868 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:28:53.909875 kernel: clk: Disabling unused clocks
May 9 23:28:53.909884 kernel: Freeing unused kernel memory: 38464K
May 9 23:28:53.909891 kernel: Run /init as init process
May 9 23:28:53.909898 kernel: with arguments:
May 9 23:28:53.909905 kernel: /init
May 9 23:28:53.909911 kernel: with environment:
May 9 23:28:53.909918 kernel: HOME=/
May 9 23:28:53.909925 kernel: TERM=linux
May 9 23:28:53.909931 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:28:53.909939 systemd[1]: Successfully made /usr/ read-only.
May 9 23:28:53.909950 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 23:28:53.909958 systemd[1]: Detected virtualization kvm.
May 9 23:28:53.909965 systemd[1]: Detected architecture arm64.
May 9 23:28:53.909972 systemd[1]: Running in initrd.
May 9 23:28:53.909980 systemd[1]: No hostname configured, using default hostname.
May 9 23:28:53.909988 systemd[1]: Hostname set to .
May 9 23:28:53.909995 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:28:53.910003 systemd[1]: Queued start job for default target initrd.target.
May 9 23:28:53.910012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:28:53.910020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:28:53.910027 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:28:53.910035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:28:53.910043 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:28:53.910051 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:28:53.910064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:28:53.910073 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:28:53.910080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:28:53.910088 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:28:53.910095 systemd[1]: Reached target paths.target - Path Units.
May 9 23:28:53.910103 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:28:53.910110 systemd[1]: Reached target swap.target - Swaps.
May 9 23:28:53.910117 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:28:53.910125 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:28:53.910134 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:28:53.910142 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:28:53.910149 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 9 23:28:53.910157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:28:53.910164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:28:53.910172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:28:53.910179 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:28:53.910187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:28:53.910196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:28:53.910218 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:28:53.910226 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:28:53.910233 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:28:53.910241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:28:53.910248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:28:53.910256 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:28:53.910263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:28:53.910273 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:28:53.910280 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:28:53.910288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:28:53.910312 systemd-journald[236]: Collecting audit messages is disabled.
May 9 23:28:53.910331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:28:53.910339 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:28:53.910353 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:28:53.910362 systemd-journald[236]: Journal started
May 9 23:28:53.910381 systemd-journald[236]: Runtime Journal (/run/log/journal/21748f2c6823449cbe01dbd63096ee44) is 5.9M, max 47.3M, 41.4M free.
May 9 23:28:53.894068 systemd-modules-load[237]: Inserted module 'overlay'
May 9 23:28:53.911944 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:28:53.912888 kernel: Bridge firewalling registered
May 9 23:28:53.912917 systemd-modules-load[237]: Inserted module 'br_netfilter'
May 9 23:28:53.913734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:28:53.916560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:28:53.920024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:28:53.923662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:28:53.930527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:28:53.932793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:28:53.935063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:28:53.936226 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:28:53.939045 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:28:53.940738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:28:53.958639 dracut-cmdline[279]: dracut-dracut-053
May 9 23:28:53.961037 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7c902da1543bd328e5d473315e9f14002c4b98c30eacfaf0678d5ea87545bd30
May 9 23:28:53.977471 systemd-resolved[280]: Positive Trust Anchors:
May 9 23:28:53.977487 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:28:53.977517 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:28:53.982220 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 9 23:28:53.985059 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:28:53.985926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:28:54.033889 kernel: SCSI subsystem initialized
May 9 23:28:54.037879 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:28:54.045882 kernel: iscsi: registered transport (tcp)
May 9 23:28:54.057883 kernel: iscsi: registered transport (qla4xxx)
May 9 23:28:54.057908 kernel: QLogic iSCSI HBA Driver
May 9 23:28:54.098202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:28:54.100147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:28:54.129146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:28:54.129186 kernel: device-mapper: uevent: version 1.0.3
May 9 23:28:54.129201 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:28:54.177888 kernel: raid6: neonx8 gen() 15764 MB/s
May 9 23:28:54.194887 kernel: raid6: neonx4 gen() 15780 MB/s
May 9 23:28:54.211880 kernel: raid6: neonx2 gen() 13189 MB/s
May 9 23:28:54.228878 kernel: raid6: neonx1 gen() 10492 MB/s
May 9 23:28:54.245881 kernel: raid6: int64x8 gen() 6786 MB/s
May 9 23:28:54.262882 kernel: raid6: int64x4 gen() 7343 MB/s
May 9 23:28:54.279885 kernel: raid6: int64x2 gen() 6108 MB/s
May 9 23:28:54.296882 kernel: raid6: int64x1 gen() 5056 MB/s
May 9 23:28:54.296906 kernel: raid6: using algorithm neonx4 gen() 15780 MB/s
May 9 23:28:54.313888 kernel: raid6: .... xor() 12344 MB/s, rmw enabled
May 9 23:28:54.313904 kernel: raid6: using neon recovery algorithm
May 9 23:28:54.320112 kernel: xor: measuring software checksum speed
May 9 23:28:54.320134 kernel: 8regs : 21658 MB/sec
May 9 23:28:54.321105 kernel: 32regs : 21681 MB/sec
May 9 23:28:54.321123 kernel: arm64_neon : 27965 MB/sec
May 9 23:28:54.321132 kernel: xor: using function: arm64_neon (27965 MB/sec)
May 9 23:28:54.372886 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:28:54.383885 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:28:54.386046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:28:54.413028 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 9 23:28:54.416699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:28:54.419011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:28:54.440773 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 9 23:28:54.465425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:28:54.467354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:28:54.517409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:28:54.520520 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:28:54.545757 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:28:54.547539 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:28:54.549227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:28:54.550997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:28:54.553388 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:28:54.568506 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 9 23:28:54.568660 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 23:28:54.572945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:28:54.573044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:28:54.575911 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:28:54.577557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:28:54.582529 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:28:54.582552 kernel: GPT:9289727 != 19775487
May 9 23:28:54.582561 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:28:54.582570 kernel: GPT:9289727 != 19775487
May 9 23:28:54.582579 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:28:54.582590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:28:54.577702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:28:54.581388 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:28:54.584023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:28:54.586150 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:28:54.597921 kernel: BTRFS: device fsid 9dddd49f-987e-4c0c-90d7-c673faae2ed3 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (521)
May 9 23:28:54.601892 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
May 9 23:28:54.611815 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 23:28:54.613049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:28:54.629928 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 23:28:54.637097 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 23:28:54.642971 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 23:28:54.643828 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 23:28:54.646493 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:28:54.648931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:28:54.670437 disk-uuid[553]: Primary Header is updated.
May 9 23:28:54.670437 disk-uuid[553]: Secondary Entries is updated.
May 9 23:28:54.670437 disk-uuid[553]: Secondary Header is updated.
May 9 23:28:54.678898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:28:54.679889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:28:55.686075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 23:28:55.686659 disk-uuid[559]: The operation has completed successfully.
May 9 23:28:55.713042 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:28:55.713157 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:28:55.737280 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:28:55.754681 sh[575]: Success
May 9 23:28:55.774913 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:28:55.802344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:28:55.804854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:28:55.818144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:28:55.823504 kernel: BTRFS info (device dm-0): first mount of filesystem 9dddd49f-987e-4c0c-90d7-c673faae2ed3
May 9 23:28:55.823537 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:28:55.823548 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:28:55.824283 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:28:55.825297 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:28:55.828851 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:28:55.830060 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:28:55.830847 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:28:55.832663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:28:55.856399 kernel: BTRFS info (device vda6): first mount of filesystem 9f46d0d0-587d-4bb8-818f-2b5d7666c988
May 9 23:28:55.856446 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:28:55.856457 kernel: BTRFS info (device vda6): using free space tree
May 9 23:28:55.858884 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:28:55.862890 kernel: BTRFS info (device vda6): last unmount of filesystem 9f46d0d0-587d-4bb8-818f-2b5d7666c988
May 9 23:28:55.865607 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:28:55.867599 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:28:55.931721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:28:55.934689 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:28:55.970157 systemd-networkd[757]: lo: Link UP
May 9 23:28:55.970171 systemd-networkd[757]: lo: Gained carrier
May 9 23:28:55.971003 systemd-networkd[757]: Enumeration completed
May 9 23:28:55.971139 ignition[664]: Ignition 2.20.0
May 9 23:28:55.971295 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:28:55.971145 ignition[664]: Stage: fetch-offline
May 9 23:28:55.971459 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:28:55.971174 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 9 23:28:55.971463 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:28:55.971182 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:55.972174 systemd-networkd[757]: eth0: Link UP
May 9 23:28:55.971389 ignition[664]: parsed url from cmdline: ""
May 9 23:28:55.972177 systemd-networkd[757]: eth0: Gained carrier
May 9 23:28:55.971392 ignition[664]: no config URL provided
May 9 23:28:55.972184 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:28:55.971397 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:28:55.972770 systemd[1]: Reached target network.target - Network.
May 9 23:28:55.971404 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 9 23:28:55.986906 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 23:28:55.971426 ignition[664]: op(1): [started] loading QEMU firmware config module
May 9 23:28:55.971430 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 23:28:55.977772 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 9 23:28:56.022511 ignition[664]: parsing config with SHA512: d3f18ce7d2125e4fffb828a79e918a874c5d5ee508596a583d40c5d02393e9770a813885926ddc9bc654b8f04dbc45f9ad6a886308aa95531386dfd72be8fa22
May 9 23:28:56.028778 unknown[664]: fetched base config from "system"
May 9 23:28:56.028789 unknown[664]: fetched user config from "qemu"
May 9 23:28:56.029315 ignition[664]: fetch-offline: fetch-offline passed
May 9 23:28:56.030962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:28:56.029401 ignition[664]: Ignition finished successfully
May 9 23:28:56.032086 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 23:28:56.032834 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:28:56.057705 ignition[770]: Ignition 2.20.0
May 9 23:28:56.057715 ignition[770]: Stage: kargs
May 9 23:28:56.057889 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 9 23:28:56.057900 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:56.058769 ignition[770]: kargs: kargs passed
May 9 23:28:56.058816 ignition[770]: Ignition finished successfully
May 9 23:28:56.060849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:28:56.063018 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:28:56.090080 ignition[779]: Ignition 2.20.0
May 9 23:28:56.090094 ignition[779]: Stage: disks
May 9 23:28:56.090242 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 9 23:28:56.092543 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:28:56.090252 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:56.093654 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:28:56.091124 ignition[779]: disks: disks passed
May 9 23:28:56.094739 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:28:56.091173 ignition[779]: Ignition finished successfully
May 9 23:28:56.096366 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:28:56.097624 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:28:56.098634 systemd[1]: Reached target basic.target - Basic System.
May 9 23:28:56.100943 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:28:56.125550 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:28:56.129515 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:28:56.131553 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:28:56.186883 kernel: EXT4-fs (vda9): mounted filesystem 11538969-c6f0-43aa-95ce-fbf226f50270 r/w with ordered data mode. Quota mode: none.
May 9 23:28:56.187297 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:28:56.188340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:28:56.190787 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:28:56.192675 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:28:56.193593 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:28:56.193633 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:28:56.193657 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:28:56.201326 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:28:56.203253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:28:56.206897 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
May 9 23:28:56.209173 kernel: BTRFS info (device vda6): first mount of filesystem 9f46d0d0-587d-4bb8-818f-2b5d7666c988
May 9 23:28:56.209235 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:28:56.209248 kernel: BTRFS info (device vda6): using free space tree
May 9 23:28:56.210875 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:28:56.211735 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:28:56.256139 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:28:56.259970 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
May 9 23:28:56.263919 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:28:56.267752 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:28:56.344782 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:28:56.346984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:28:56.348271 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:28:56.361894 kernel: BTRFS info (device vda6): last unmount of filesystem 9f46d0d0-587d-4bb8-818f-2b5d7666c988
May 9 23:28:56.374422 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:28:56.382707 ignition[912]: INFO : Ignition 2.20.0
May 9 23:28:56.382707 ignition[912]: INFO : Stage: mount
May 9 23:28:56.384719 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:28:56.384719 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:56.384719 ignition[912]: INFO : mount: mount passed
May 9 23:28:56.384719 ignition[912]: INFO : Ignition finished successfully
May 9 23:28:56.385426 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:28:56.387141 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:28:56.948374 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:28:56.949774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:28:56.970606 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
May 9 23:28:56.970646 kernel: BTRFS info (device vda6): first mount of filesystem 9f46d0d0-587d-4bb8-818f-2b5d7666c988
May 9 23:28:56.970657 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:28:56.971878 kernel: BTRFS info (device vda6): using free space tree
May 9 23:28:56.973873 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 23:28:56.974805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:28:57.005571 ignition[943]: INFO : Ignition 2.20.0
May 9 23:28:57.005571 ignition[943]: INFO : Stage: files
May 9 23:28:57.006871 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:28:57.006871 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:57.006871 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:28:57.009628 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:28:57.009628 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:28:57.012080 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:28:57.013107 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:28:57.013107 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:28:57.012603 unknown[943]: wrote ssh authorized keys file for user: core
May 9 23:28:57.015796 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:28:57.015796 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 9 23:28:57.064884 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:28:57.065993 systemd-networkd[757]: eth0: Gained IPv6LL
May 9 23:28:57.247498 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 9 23:28:57.249211 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:28:57.249211 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 9 23:28:57.583598 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:28:57.648524 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:28:57.650322 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 9 23:28:57.973316 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:28:58.932237 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 9 23:28:58.932237 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 9 23:28:58.935051 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 9 23:28:58.951519 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:28:58.954343 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 23:28:58.956645 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 23:28:58.956645 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:28:58.956645 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:28:58.956645 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:28:58.956645 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:28:58.956645 ignition[943]: INFO : files: files passed
May 9 23:28:58.956645 ignition[943]: INFO : Ignition finished successfully
May 9 23:28:58.957006 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:28:58.959622 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:28:58.961998 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:28:58.974781 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:28:58.974877 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:28:58.977740 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 23:28:58.978783 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:28:58.978783 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:28:58.980884 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:28:58.980160 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:28:58.982196 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:28:58.984273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:28:59.029728 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:28:59.029824 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:28:59.031591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:28:59.032842 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:28:59.034313 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:28:59.034990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:28:59.057379 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:28:59.059574 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:28:59.074906 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:28:59.075788 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:28:59.077296 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:28:59.078615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:28:59.078724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:28:59.080547 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:28:59.082104 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:28:59.083312 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:28:59.084566 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:28:59.085932 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:28:59.087509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:28:59.088804 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:28:59.090328 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:28:59.091702 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:28:59.092943 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:28:59.094171 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:28:59.094280 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:28:59.096017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:28:59.097436 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:28:59.098832 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:28:59.099958 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:28:59.101126 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:28:59.101238 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:28:59.103455 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:28:59.103572 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:28:59.104967 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:28:59.106200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:28:59.109909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:28:59.112032 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:28:59.112740 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:28:59.113996 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:28:59.114073 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:28:59.115228 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:28:59.115307 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:28:59.116579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:28:59.116691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:28:59.117905 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:28:59.118008 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:28:59.119828 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:28:59.121214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:28:59.121354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:28:59.134299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:28:59.134939 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:28:59.135053 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:28:59.136463 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:28:59.136573 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:28:59.141820 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:28:59.141936 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:28:59.144878 ignition[1000]: INFO : Ignition 2.20.0
May 9 23:28:59.144878 ignition[1000]: INFO : Stage: umount
May 9 23:28:59.146159 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:28:59.146159 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 23:28:59.146159 ignition[1000]: INFO : umount: umount passed
May 9 23:28:59.146159 ignition[1000]: INFO : Ignition finished successfully
May 9 23:28:59.147927 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:28:59.148388 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:28:59.148471 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:28:59.150028 systemd[1]: Stopped target network.target - Network.
May 9 23:28:59.150857 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:28:59.150966 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:28:59.152499 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:28:59.152545 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:28:59.153745 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:28:59.153784 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:28:59.155290 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:28:59.155349 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:28:59.156827 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:28:59.158122 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:28:59.165466 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:28:59.166655 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:28:59.170482 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 9 23:28:59.170667 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:28:59.170757 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:28:59.173075 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 9 23:28:59.173667 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:28:59.173723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:28:59.175454 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:28:59.176731 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:28:59.176790 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:28:59.178592 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:28:59.178634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:28:59.180583 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:28:59.180625 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:28:59.182022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:28:59.182060 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:28:59.184312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:28:59.187416 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 9 23:28:59.187474 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 9 23:28:59.202203 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:28:59.202356 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:28:59.205395 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:28:59.205496 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:28:59.207291 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:28:59.207406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:28:59.208833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:28:59.208895 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:28:59.210322 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:28:59.210377 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:28:59.212699 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:28:59.212746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:28:59.214829 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:28:59.214889 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:28:59.218063 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:28:59.220044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:28:59.220104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:28:59.222592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:28:59.222633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:28:59.225799 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 9 23:28:59.225853 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 9 23:28:59.226152 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:28:59.226230 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:28:59.227770 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:28:59.227851 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:28:59.230528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:28:59.230618 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:28:59.232134 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:28:59.233982 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:28:59.250189 systemd[1]: Switching root.
May 9 23:28:59.278710 systemd-journald[236]: Journal stopped
May 9 23:29:00.036596 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
May 9 23:29:00.036655 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:29:00.036667 kernel: SELinux: policy capability open_perms=1
May 9 23:29:00.036679 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:29:00.036689 kernel: SELinux: policy capability always_check_network=0
May 9 23:29:00.036698 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:29:00.036707 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:29:00.036717 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:29:00.036726 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:29:00.036736 kernel: audit: type=1403 audit(1746833339.438:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:29:00.036748 systemd[1]: Successfully loaded SELinux policy in 30.183ms.
May 9 23:29:00.036769 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.831ms.
May 9 23:29:00.036783 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 23:29:00.036795 systemd[1]: Detected virtualization kvm.
May 9 23:29:00.036805 systemd[1]: Detected architecture arm64.
May 9 23:29:00.036815 systemd[1]: Detected first boot.
May 9 23:29:00.036825 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:29:00.036836 zram_generator::config[1048]: No configuration found.
May 9 23:29:00.036847 kernel: NET: Registered PF_VSOCK protocol family
May 9 23:29:00.036857 systemd[1]: Populated /etc with preset unit settings.
May 9 23:29:00.036899 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 9 23:29:00.036912 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 23:29:00.036922 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 23:29:00.036936 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 23:29:00.036946 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:29:00.036957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:29:00.036967 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:29:00.036977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:29:00.036988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:29:00.037000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:29:00.037011 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:29:00.037022 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:29:00.037032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:29:00.037043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:29:00.037053 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:29:00.037063 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:29:00.037074 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:29:00.037085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:29:00.037097 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 9 23:29:00.037107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:29:00.037118 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 23:29:00.037128 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 23:29:00.037139 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 23:29:00.037149 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:29:00.037161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:29:00.037173 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:29:00.037184 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:29:00.037194 systemd[1]: Reached target swap.target - Swaps.
May 9 23:29:00.037205 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:29:00.037215 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:29:00.037226 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 9 23:29:00.037237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:29:00.037247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:29:00.037258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:29:00.037268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:29:00.037280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:29:00.037291 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:29:00.037301 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:29:00.037315 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:29:00.037328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:29:00.037338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:29:00.037349 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:29:00.037360 systemd[1]: Reached target machines.target - Containers.
May 9 23:29:00.037372 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:29:00.037383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:29:00.037393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:29:00.037404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:29:00.037414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:29:00.037425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:29:00.037440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:29:00.037450 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:29:00.037461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:29:00.037473 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:29:00.037483 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 23:29:00.037493 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 23:29:00.037504 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 23:29:00.037518 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 23:29:00.037528 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 23:29:00.037539 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:29:00.037549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:29:00.037562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:29:00.037572 kernel: loop: module loaded
May 9 23:29:00.037581 kernel: fuse: init (API version 7.39)
May 9 23:29:00.037591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:29:00.037601 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 9 23:29:00.037611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:29:00.037623 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 23:29:00.037634 systemd[1]: Stopped verity-setup.service.
May 9 23:29:00.037644 kernel: ACPI: bus type drm_connector registered
May 9 23:29:00.037653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:29:00.037664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:29:00.037674 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:29:00.037684 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:29:00.037694 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:29:00.037706 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:29:00.037717 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:29:00.037727 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:29:00.037758 systemd-journald[1113]: Collecting audit messages is disabled.
May 9 23:29:00.037783 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:29:00.037793 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:29:00.037804 systemd-journald[1113]: Journal started
May 9 23:29:00.037827 systemd-journald[1113]: Runtime Journal (/run/log/journal/21748f2c6823449cbe01dbd63096ee44) is 5.9M, max 47.3M, 41.4M free.
May 9 23:28:59.835856 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:28:59.851706 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 23:28:59.852095 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 23:29:00.039943 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:29:00.040685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:29:00.040856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:29:00.041966 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:29:00.042127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:29:00.043125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:29:00.043274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:29:00.044481 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:29:00.044628 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:29:00.045736 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:29:00.046095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:29:00.047186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:29:00.048255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:29:00.049457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:29:00.050850 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 9 23:29:00.063130 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:29:00.065328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:29:00.067165 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:29:00.068026 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:29:00.068052 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:29:00.069649 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 9 23:29:00.077083 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:29:00.078882 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:29:00.079705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:29:00.080962 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:29:00.085022 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:29:00.086067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:29:00.088063 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:29:00.089116 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:29:00.090072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:29:00.093302 systemd-journald[1113]: Time spent on flushing to /var/log/journal/21748f2c6823449cbe01dbd63096ee44 is 24.847ms for 871 entries.
May 9 23:29:00.093302 systemd-journald[1113]: System Journal (/var/log/journal/21748f2c6823449cbe01dbd63096ee44) is 8M, max 195.6M, 187.6M free.
May 9 23:29:00.129695 systemd-journald[1113]: Received client request to flush runtime journal.
May 9 23:29:00.129996 kernel: loop0: detected capacity change from 0 to 126448
May 9 23:29:00.094233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:29:00.097235 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:29:00.111915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:29:00.113161 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:29:00.114134 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:29:00.115345 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:29:00.119776 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:29:00.121103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:29:00.125634 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 9 23:29:00.130985 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:29:00.132987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:29:00.135397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:29:00.146893 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:29:00.149590 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 23:29:00.163915 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:29:00.166197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:29:00.176258 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 9 23:29:00.185894 kernel: loop1: detected capacity change from 0 to 103832
May 9 23:29:00.192655 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
May 9 23:29:00.192673 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
May 9 23:29:00.197684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:29:00.237416 kernel: loop2: detected capacity change from 0 to 201592
May 9 23:29:00.284301 kernel: loop3: detected capacity change from 0 to 126448
May 9 23:29:00.303905 kernel: loop4: detected capacity change from 0 to 103832
May 9 23:29:00.310899 kernel: loop5: detected capacity change from 0 to 201592
May 9 23:29:00.320806 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 23:29:00.321267 (sd-merge)[1191]: Merged extensions into '/usr'.
May 9 23:29:00.324618 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:29:00.324636 systemd[1]: Reloading...
May 9 23:29:00.383173 zram_generator::config[1219]: No configuration found.
May 9 23:29:00.473631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:29:00.519278 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:29:00.523593 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:29:00.523836 systemd[1]: Reloading finished in 198 ms.
May 9 23:29:00.548526 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:29:00.549692 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:29:00.560152 systemd[1]: Starting ensure-sysext.service...
May 9 23:29:00.561670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:29:00.575572 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
May 9 23:29:00.575590 systemd[1]: Reloading...
May 9 23:29:00.580477 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:29:00.580685 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:29:00.581415 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:29:00.581626 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 9 23:29:00.581683 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 9 23:29:00.584130 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:29:00.584143 systemd-tmpfiles[1254]: Skipping /boot
May 9 23:29:00.593092 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:29:00.593100 systemd-tmpfiles[1254]: Skipping /boot
May 9 23:29:00.624894 zram_generator::config[1284]: No configuration found.
May 9 23:29:00.708113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:29:00.758462 systemd[1]: Reloading finished in 182 ms.
May 9 23:29:00.769293 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:29:00.785908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:29:00.793159 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 23:29:00.795163 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:29:00.797825 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:29:00.801020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:29:00.803394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:29:00.806137 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:29:00.809775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:29:00.814357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:29:00.824571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:29:00.826924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:29:00.827832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:29:00.827970 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 23:29:00.830897 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:29:00.832496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:29:00.832640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:29:00.834202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:29:00.834375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:29:00.835918 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:29:00.836050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:29:00.845637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:29:00.850471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:29:00.852064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:29:00.853718 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
May 9 23:29:00.853765 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:29:00.855491 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:29:00.863641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:29:00.864737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:29:00.864858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 23:29:00.869655 augenrules[1358]: No rules
May 9 23:29:00.870538 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:29:00.873198 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:29:00.876911 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 23:29:00.877548 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 23:29:00.878992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:29:00.883083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:29:00.892497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:29:00.892652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:29:00.894256 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:29:00.894415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:29:00.895976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:29:00.896139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:29:00.897709 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:29:00.897856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:29:00.899734 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:29:00.904531 systemd[1]: Finished ensure-sysext.service.
May 9 23:29:00.917339 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 9 23:29:00.919327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:29:00.923081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:29:00.923164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:29:00.925973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1379)
May 9 23:29:00.927106 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 23:29:00.930919 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:29:00.950995 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:29:00.979787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 23:29:00.985033 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:29:01.013244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:29:01.024548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 23:29:01.026303 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:29:01.052521 systemd-resolved[1323]: Positive Trust Anchors:
May 9 23:29:01.052538 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:29:01.052569 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:29:01.058978 systemd-resolved[1323]: Defaulting to hostname 'linux'.
May 9 23:29:01.060353 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:29:01.061413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:29:01.062403 systemd-networkd[1391]: lo: Link UP
May 9 23:29:01.062406 systemd-networkd[1391]: lo: Gained carrier
May 9 23:29:01.063660 systemd-networkd[1391]: Enumeration completed
May 9 23:29:01.063810 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:29:01.065459 systemd[1]: Reached target network.target - Network.
May 9 23:29:01.066082 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:29:01.066090 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:29:01.066690 systemd-networkd[1391]: eth0: Link UP
May 9 23:29:01.066750 systemd-networkd[1391]: eth0: Gained carrier
May 9 23:29:01.066798 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:29:01.068232 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 9 23:29:01.071121 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 23:29:01.081966 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 23:29:01.082983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:29:01.085634 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection.
May 9 23:29:01.087087 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 23:29:01.087145 systemd-timesyncd[1392]: Initial clock synchronization to Fri 2025-05-09 23:29:00.786215 UTC.
May 9 23:29:01.095920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:29:01.097259 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 9 23:29:01.100240 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:29:01.134230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:29:01.137799 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:29:01.177434 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:29:01.178702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:29:01.180993 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:29:01.181819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 23:29:01.182839 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 23:29:01.183962 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 23:29:01.184846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 23:29:01.185781 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 23:29:01.186895 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 23:29:01.186932 systemd[1]: Reached target paths.target - Path Units.
May 9 23:29:01.187612 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:29:01.189497 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 23:29:01.191739 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 23:29:01.194959 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 9 23:29:01.196099 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 9 23:29:01.197125 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 9 23:29:01.200827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 23:29:01.202030 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 9 23:29:01.203922 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:29:01.205199 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 23:29:01.206088 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:29:01.206810 systemd[1]: Reached target basic.target - Basic System.
May 9 23:29:01.207747 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 23:29:01.207780 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 23:29:01.208702 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 23:29:01.210537 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 23:29:01.211346 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:29:01.214969 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 23:29:01.216761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 23:29:01.218132 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 23:29:01.219928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 23:29:01.221652 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 23:29:01.226590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 23:29:01.230605 jq[1431]: false
May 9 23:29:01.230921 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 23:29:01.234696 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 23:29:01.236767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 23:29:01.237212 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 23:29:01.242671 systemd[1]: Starting update-engine.service - Update Engine...
May 9 23:29:01.244504 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 23:29:01.246399 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 23:29:01.249997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 23:29:01.250190 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 23:29:01.250823 dbus-daemon[1430]: [system] SELinux support is enabled
May 9 23:29:01.254254 extend-filesystems[1432]: Found loop3
May 9 23:29:01.255177 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 23:29:01.257909 systemd[1]: motdgen.service: Deactivated successfully.
May 9 23:29:01.258517 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 23:29:01.259740 extend-filesystems[1432]: Found loop4
May 9 23:29:01.259740 extend-filesystems[1432]: Found loop5
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda1
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda2
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda3
May 9 23:29:01.259740 extend-filesystems[1432]: Found usr
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda4
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda6
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda7
May 9 23:29:01.259740 extend-filesystems[1432]: Found vda9
May 9 23:29:01.259740 extend-filesystems[1432]: Checking size of /dev/vda9
May 9 23:29:01.260012 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 23:29:01.262040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 23:29:01.272710 jq[1445]: true
May 9 23:29:01.275678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 23:29:01.275949 tar[1448]: linux-arm64/LICENSE
May 9 23:29:01.275737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 23:29:01.276172 tar[1448]: linux-arm64/helm
May 9 23:29:01.277037 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 23:29:01.277069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 23:29:01.281181 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 23:29:01.288153 extend-filesystems[1432]: Resized partition /dev/vda9
May 9 23:29:01.290319 extend-filesystems[1466]: resize2fs 1.47.2 (1-Jan-2025)
May 9 23:29:01.295886 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 23:29:01.303909 update_engine[1444]: I20250509 23:29:01.301780 1444 main.cc:92] Flatcar Update Engine starting
May 9 23:29:01.308390 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1374)
May 9 23:29:01.308269 systemd[1]: Started update-engine.service - Update Engine.
May 9 23:29:01.308486 update_engine[1444]: I20250509 23:29:01.308346 1444 update_check_scheduler.cc:74] Next update check in 10m51s
May 9 23:29:01.312991 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 23:29:01.314069 jq[1464]: true
May 9 23:29:01.323012 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (Power Button)
May 9 23:29:01.341592 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 23:29:01.323446 systemd-logind[1438]: New seat seat0.
May 9 23:29:01.330666 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 23:29:01.343250 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 23:29:01.343250 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 23:29:01.343250 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 23:29:01.349554 extend-filesystems[1432]: Resized filesystem in /dev/vda9
May 9 23:29:01.351570 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 23:29:01.353097 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 23:29:01.399134 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
May 9 23:29:01.400742 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 23:29:01.402396 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 23:29:01.414199 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 23:29:01.519330 containerd[1452]: time="2025-05-09T23:29:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 9 23:29:01.521065 containerd[1452]: time="2025-05-09T23:29:01.521021920Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 9 23:29:01.530651 containerd[1452]: time="2025-05-09T23:29:01.530560400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.08µs"
May 9 23:29:01.530651 containerd[1452]: time="2025-05-09T23:29:01.530597480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 9 23:29:01.530651 containerd[1452]: time="2025-05-09T23:29:01.530619520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 9 23:29:01.530851 containerd[1452]: time="2025-05-09T23:29:01.530816600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 9 23:29:01.530851 containerd[1452]: time="2025-05-09T23:29:01.530844880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 9 23:29:01.530911 containerd[1452]: time="2025-05-09T23:29:01.530883560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 23:29:01.531024 containerd[1452]: time="2025-05-09T23:29:01.530992680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 23:29:01.531024 containerd[1452]: time="2025-05-09T23:29:01.531015840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 23:29:01.531404 containerd[1452]: time="2025-05-09T23:29:01.531372840Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 23:29:01.531404 containerd[1452]: time="2025-05-09T23:29:01.531398880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 23:29:01.531479 containerd[1452]: time="2025-05-09T23:29:01.531410440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 23:29:01.531507 containerd[1452]: time="2025-05-09T23:29:01.531479160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 9 23:29:01.531585 containerd[1452]: time="2025-05-09T23:29:01.531561680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 9 23:29:01.531848 containerd[1452]: time="2025-05-09T23:29:01.531820040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 23:29:01.531972 containerd[1452]: time="2025-05-09T23:29:01.531886520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 23:29:01.531994 containerd[1452]: time="2025-05-09T23:29:01.531974040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 9 23:29:01.532026 containerd[1452]: time="2025-05-09T23:29:01.532014120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 9 23:29:01.532623 containerd[1452]: time="2025-05-09T23:29:01.532600240Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 9 23:29:01.532855 containerd[1452]: time="2025-05-09T23:29:01.532833600Z" level=info msg="metadata content store policy set" policy=shared
May 9 23:29:01.536320 containerd[1452]: time="2025-05-09T23:29:01.536278600Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 9 23:29:01.536367 containerd[1452]: time="2025-05-09T23:29:01.536347440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 9 23:29:01.536367 containerd[1452]: time="2025-05-09T23:29:01.536363200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 9 23:29:01.536405 containerd[1452]: time="2025-05-09T23:29:01.536380400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 9 23:29:01.536405 containerd[1452]: time="2025-05-09T23:29:01.536394680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 9 23:29:01.536437 containerd[1452]: time="2025-05-09T23:29:01.536412200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 9 23:29:01.536437 containerd[1452]: time="2025-05-09T23:29:01.536427480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 9 23:29:01.536468 containerd[1452]: time="2025-05-09T23:29:01.536439600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 9 23:29:01.536468 containerd[1452]: time="2025-05-09T23:29:01.536450960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 9 23:29:01.536468 containerd[1452]: time="2025-05-09T23:29:01.536461720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 9 23:29:01.536519 containerd[1452]: time="2025-05-09T23:29:01.536471520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 9 23:29:01.536519 containerd[1452]: time="2025-05-09T23:29:01.536483240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 9 23:29:01.536731 containerd[1452]: time="2025-05-09T23:29:01.536695560Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 9 23:29:01.536761 containerd[1452]: time="2025-05-09T23:29:01.536730880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 9 23:29:01.536831 containerd[1452]: time="2025-05-09T23:29:01.536756040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 9 23:29:01.536857 containerd[1452]: time="2025-05-09T23:29:01.536832040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 9 23:29:01.536857 containerd[1452]: time="2025-05-09T23:29:01.536847320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 9 23:29:01.536898 containerd[1452]: time="2025-05-09T23:29:01.536858000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 9 23:29:01.536898 containerd[1452]: time="2025-05-09T23:29:01.536886360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 9 23:29:01.536943 containerd[1452]: time="2025-05-09T23:29:01.536898480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 9 23:29:01.536943 containerd[1452]: time="2025-05-09T23:29:01.536910400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 9 23:29:01.536943 containerd[1452]: time="2025-05-09T23:29:01.536920960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 9 23:29:01.536943 containerd[1452]: time="2025-05-09T23:29:01.536931200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 9 23:29:01.537330 containerd[1452]: time="2025-05-09T23:29:01.537292200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 9 23:29:01.537330 containerd[1452]: time="2025-05-09T23:29:01.537327320Z" level=info msg="Start snapshots syncer"
May 9 23:29:01.537427 containerd[1452]: time="2025-05-09T23:29:01.537408960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 9 23:29:01.537926 containerd[1452]: time="2025-05-09T23:29:01.537884600Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 9 23:29:01.538020 containerd[1452]: time="2025-05-09T23:29:01.537955000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 9 23:29:01.538188 containerd[1452]: time="2025-05-09T23:29:01.538166920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 9 23:29:01.538356 containerd[1452]: time="2025-05-09T23:29:01.538323760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 9 23:29:01.538396 containerd[1452]: time="2025-05-09T23:29:01.538355240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 9 23:29:01.538396 containerd[1452]: time="2025-05-09T23:29:01.538367360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 9 23:29:01.538396 containerd[1452]: time="2025-05-09T23:29:01.538377680Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 9 23:29:01.538396 containerd[1452]: time="2025-05-09T23:29:01.538390880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 9 23:29:01.538460 containerd[1452]: time="2025-05-09T23:29:01.538401320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 9 23:29:01.538460 containerd[1452]: time="2025-05-09T23:29:01.538412040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 9 23:29:01.538460 containerd[1452]: time="2025-05-09T23:29:01.538446560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 9 23:29:01.538519 containerd[1452]: time="2025-05-09T23:29:01.538459160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 9 23:29:01.538519 containerd[1452]: time="2025-05-09T23:29:01.538469000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 9 23:29:01.538519 containerd[1452]: time="2025-05-09T23:29:01.538507200Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 9 23:29:01.538568 containerd[1452]: time="2025-05-09T23:29:01.538525200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 9 23:29:01.538568 containerd[1452]: time="2025-05-09T23:29:01.538534640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 9 23:29:01.538568 containerd[1452]: time="2025-05-09T23:29:01.538555320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 9 23:29:01.538568 containerd[1452]: time="2025-05-09T23:29:01.538563720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 9 23:29:01.540084 containerd[1452]: time="2025-05-09T23:29:01.539735640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 9 23:29:01.540084 containerd[1452]: time="2025-05-09T23:29:01.539983720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 9 23:29:01.540084 containerd[1452]: time="2025-05-09T23:29:01.540071760Z" level=info msg="runtime interface created"
May 9 23:29:01.540170 containerd[1452]: time="2025-05-09T23:29:01.540128360Z" level=info msg="created NRI interface"
May 9 23:29:01.540170 containerd[1452]: time="2025-05-09T23:29:01.540150560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 9 23:29:01.540170 containerd[1452]: time="2025-05-09T23:29:01.540168040Z" level=info msg="Connect containerd service"
May 9 23:29:01.540237 containerd[1452]: time="2025-05-09T23:29:01.540202520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 23:29:01.541187 containerd[1452]: time="2025-05-09T23:29:01.541158960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 23:29:01.643955 containerd[1452]: time="2025-05-09T23:29:01.643885160Z" level=info msg="Start subscribing containerd event"
May 9 23:29:01.643955 containerd[1452]: time="2025-05-09T23:29:01.643960200Z" level=info msg="Start recovering state"
May 9 23:29:01.644072 containerd[1452]: time="2025-05-09T23:29:01.644050040Z" level=info msg="Start event monitor"
May 9 23:29:01.644091 containerd[1452]: time="2025-05-09T23:29:01.644072400Z" level=info msg="Start cni network conf syncer for default"
May 9 23:29:01.644091 containerd[1452]: time="2025-05-09T23:29:01.644081880Z" level=info msg="Start streaming server"
May 9 23:29:01.644147 containerd[1452]: time="2025-05-09T23:29:01.644091120Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 9 23:29:01.644147 containerd[1452]: time="2025-05-09T23:29:01.644098440Z" level=info msg="runtime interface starting up..."
May 9 23:29:01.644147 containerd[1452]: time="2025-05-09T23:29:01.644104240Z" level=info msg="starting plugins..."
May 9 23:29:01.644147 containerd[1452]: time="2025-05-09T23:29:01.644117960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 9 23:29:01.644778 containerd[1452]: time="2025-05-09T23:29:01.644745360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 23:29:01.644895 containerd[1452]: time="2025-05-09T23:29:01.644810360Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 23:29:01.647627 systemd[1]: Started containerd.service - containerd container runtime.
May 9 23:29:01.647922 containerd[1452]: time="2025-05-09T23:29:01.647874520Z" level=info msg="containerd successfully booted in 0.128560s"
May 9 23:29:01.715931 tar[1448]: linux-arm64/README.md
May 9 23:29:01.735231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 23:29:01.927045 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 23:29:01.946006 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 23:29:01.948429 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 23:29:01.967777 systemd[1]: issuegen.service: Deactivated successfully.
May 9 23:29:01.968087 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 23:29:01.971288 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 23:29:01.993934 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 23:29:01.996521 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 23:29:01.998616 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 9 23:29:01.999992 systemd[1]: Reached target getty.target - Login Prompts.
May 9 23:29:02.698035 systemd-networkd[1391]: eth0: Gained IPv6LL
May 9 23:29:02.700558 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 23:29:02.703172 systemd[1]: Reached target network-online.target - Network is Online.
May 9 23:29:02.705233 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 9 23:29:02.707170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:02.708875 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 23:29:02.731196 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 23:29:02.732617 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 9 23:29:02.732856 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 9 23:29:02.735189 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 23:29:03.215481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:03.216807 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 23:29:03.218476 systemd[1]: Startup finished in 529ms (kernel) + 5.744s (initrd) + 3.813s (userspace) = 10.087s.
May 9 23:29:03.219224 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:29:03.607403 kubelet[1556]: E0509 23:29:03.607271 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:29:03.609455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:29:03.609604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:29:03.609896 systemd[1]: kubelet.service: Consumed 784ms CPU time, 250.4M memory peak.
May 9 23:29:06.555227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 23:29:06.556370 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938).
May 9 23:29:06.654662 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:06.656131 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:06.668946 systemd-logind[1438]: New session 1 of user core.
May 9 23:29:06.669874 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 23:29:06.670849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 23:29:06.698132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 23:29:06.700297 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 23:29:06.722172 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 23:29:06.724498 systemd-logind[1438]: New session c1 of user core.
May 9 23:29:06.837628 systemd[1573]: Queued start job for default target default.target.
May 9 23:29:06.849881 systemd[1573]: Created slice app.slice - User Application Slice.
May 9 23:29:06.849912 systemd[1573]: Reached target paths.target - Paths.
May 9 23:29:06.849951 systemd[1573]: Reached target timers.target - Timers.
May 9 23:29:06.851202 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 23:29:06.860141 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 23:29:06.860213 systemd[1573]: Reached target sockets.target - Sockets.
May 9 23:29:06.860252 systemd[1573]: Reached target basic.target - Basic System.
May 9 23:29:06.860284 systemd[1573]: Reached target default.target - Main User Target.
May 9 23:29:06.860310 systemd[1573]: Startup finished in 130ms.
May 9 23:29:06.860565 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 23:29:06.861906 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 23:29:06.921304 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:53948.service - OpenSSH per-connection server daemon (10.0.0.1:53948).
May 9 23:29:06.968105 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53948 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:06.969220 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:06.973280 systemd-logind[1438]: New session 2 of user core.
May 9 23:29:06.984018 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 23:29:07.033891 sshd[1586]: Connection closed by 10.0.0.1 port 53948
May 9 23:29:07.034354 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
May 9 23:29:07.046284 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:53948.service: Deactivated successfully.
May 9 23:29:07.049380 systemd[1]: session-2.scope: Deactivated successfully.
May 9 23:29:07.053941 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
May 9 23:29:07.055827 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:53964.service - OpenSSH per-connection server daemon (10.0.0.1:53964).
May 9 23:29:07.056596 systemd-logind[1438]: Removed session 2.
May 9 23:29:07.112600 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53964 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:07.113651 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:07.117699 systemd-logind[1438]: New session 3 of user core.
May 9 23:29:07.139035 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 23:29:07.186771 sshd[1594]: Connection closed by 10.0.0.1 port 53964
May 9 23:29:07.187078 sshd-session[1591]: pam_unix(sshd:session): session closed for user core
May 9 23:29:07.196913 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:53964.service: Deactivated successfully.
May 9 23:29:07.199220 systemd[1]: session-3.scope: Deactivated successfully.
May 9 23:29:07.200461 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
May 9 23:29:07.202196 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:53970.service - OpenSSH per-connection server daemon (10.0.0.1:53970).
May 9 23:29:07.202894 systemd-logind[1438]: Removed session 3.
May 9 23:29:07.249253 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 53970 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:07.250408 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:07.254831 systemd-logind[1438]: New session 4 of user core.
May 9 23:29:07.269011 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 23:29:07.319456 sshd[1602]: Connection closed by 10.0.0.1 port 53970
May 9 23:29:07.319786 sshd-session[1599]: pam_unix(sshd:session): session closed for user core
May 9 23:29:07.339912 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:53970.service: Deactivated successfully.
May 9 23:29:07.341363 systemd[1]: session-4.scope: Deactivated successfully.
May 9 23:29:07.342644 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
May 9 23:29:07.344421 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:53984.service - OpenSSH per-connection server daemon (10.0.0.1:53984).
May 9 23:29:07.345168 systemd-logind[1438]: Removed session 4.
May 9 23:29:07.400522 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:07.401887 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:07.405906 systemd-logind[1438]: New session 5 of user core.
May 9 23:29:07.415008 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 23:29:07.482804 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 23:29:07.483148 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:29:07.494598 sudo[1611]: pam_unix(sudo:session): session closed for user root
May 9 23:29:07.496055 sshd[1610]: Connection closed by 10.0.0.1 port 53984
May 9 23:29:07.496546 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
May 9 23:29:07.505969 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:53984.service: Deactivated successfully.
May 9 23:29:07.507384 systemd[1]: session-5.scope: Deactivated successfully.
May 9 23:29:07.508359 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
May 9 23:29:07.510247 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996).
May 9 23:29:07.510915 systemd-logind[1438]: Removed session 5.
May 9 23:29:07.563487 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:07.564900 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:07.568428 systemd-logind[1438]: New session 6 of user core.
May 9 23:29:07.582061 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 23:29:07.631154 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 23:29:07.631428 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:29:07.634328 sudo[1621]: pam_unix(sudo:session): session closed for user root
May 9 23:29:07.638669 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 9 23:29:07.639165 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:29:07.646830 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 23:29:07.681351 augenrules[1643]: No rules
May 9 23:29:07.682537 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 23:29:07.682734 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 23:29:07.683911 sudo[1620]: pam_unix(sudo:session): session closed for user root
May 9 23:29:07.685152 sshd[1619]: Connection closed by 10.0.0.1 port 53996
May 9 23:29:07.685578 sshd-session[1616]: pam_unix(sshd:session): session closed for user core
May 9 23:29:07.698972 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:53996.service: Deactivated successfully.
May 9 23:29:07.700321 systemd[1]: session-6.scope: Deactivated successfully.
May 9 23:29:07.702089 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
May 9 23:29:07.703677 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:54002.service - OpenSSH per-connection server daemon (10.0.0.1:54002).
May 9 23:29:07.705248 systemd-logind[1438]: Removed session 6.
May 9 23:29:07.755334 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 54002 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:07.756591 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:07.760876 systemd-logind[1438]: New session 7 of user core.
May 9 23:29:07.771023 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 23:29:07.821094 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 23:29:07.821360 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:29:08.156017 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 23:29:08.166288 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 23:29:08.406849 dockerd[1675]: time="2025-05-09T23:29:08.406720788Z" level=info msg="Starting up"
May 9 23:29:08.409001 dockerd[1675]: time="2025-05-09T23:29:08.408956612Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 9 23:29:08.514874 dockerd[1675]: time="2025-05-09T23:29:08.514819564Z" level=info msg="Loading containers: start."
May 9 23:29:08.662881 kernel: Initializing XFRM netlink socket
May 9 23:29:08.732742 systemd-networkd[1391]: docker0: Link UP
May 9 23:29:08.801268 dockerd[1675]: time="2025-05-09T23:29:08.801230704Z" level=info msg="Loading containers: done."
May 9 23:29:08.823795 dockerd[1675]: time="2025-05-09T23:29:08.823743854Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 23:29:08.823955 dockerd[1675]: time="2025-05-09T23:29:08.823826711Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 9 23:29:08.824084 dockerd[1675]: time="2025-05-09T23:29:08.824050432Z" level=info msg="Daemon has completed initialization"
May 9 23:29:08.855608 dockerd[1675]: time="2025-05-09T23:29:08.855490757Z" level=info msg="API listen on /run/docker.sock"
May 9 23:29:08.855740 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 23:29:09.593046 containerd[1452]: time="2025-05-09T23:29:09.590972235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 9 23:29:10.241174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275248267.mount: Deactivated successfully.
May 9 23:29:11.667508 containerd[1452]: time="2025-05-09T23:29:11.667452826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:11.668880 containerd[1452]: time="2025-05-09T23:29:11.667910552Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 9 23:29:11.668880 containerd[1452]: time="2025-05-09T23:29:11.668834033Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:11.672164 containerd[1452]: time="2025-05-09T23:29:11.672103373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:11.672977 containerd[1452]: time="2025-05-09T23:29:11.672942380Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.081927549s"
May 9 23:29:11.673176 containerd[1452]: time="2025-05-09T23:29:11.673046312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 9 23:29:11.673644 containerd[1452]: time="2025-05-09T23:29:11.673620942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 9 23:29:13.227564 containerd[1452]: time="2025-05-09T23:29:13.227517199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:13.228500 containerd[1452]: time="2025-05-09T23:29:13.228254997Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 9 23:29:13.229162 containerd[1452]: time="2025-05-09T23:29:13.229121393Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:13.231442 containerd[1452]: time="2025-05-09T23:29:13.231389829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:13.232496 containerd[1452]: time="2025-05-09T23:29:13.232360792Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.558708667s"
May 9 23:29:13.232496 containerd[1452]: time="2025-05-09T23:29:13.232399970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 9 23:29:13.233130 containerd[1452]: time="2025-05-09T23:29:13.232965354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 9 23:29:13.703948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 23:29:13.705394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:13.854833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:13.858186 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:29:13.896655 kubelet[1949]: E0509 23:29:13.896589 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:29:13.898901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:29:13.899027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:29:13.899278 systemd[1]: kubelet.service: Consumed 141ms CPU time, 104.8M memory peak.
May 9 23:29:14.886737 containerd[1452]: time="2025-05-09T23:29:14.886687108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:14.887785 containerd[1452]: time="2025-05-09T23:29:14.887727211Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 9 23:29:14.888848 containerd[1452]: time="2025-05-09T23:29:14.888820747Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:14.892748 containerd[1452]: time="2025-05-09T23:29:14.892696372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:14.893461 containerd[1452]: time="2025-05-09T23:29:14.893325928Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.660327852s"
May 9 23:29:14.893461 containerd[1452]: time="2025-05-09T23:29:14.893359274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 9 23:29:14.893824 containerd[1452]: time="2025-05-09T23:29:14.893798800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 9 23:29:16.034210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809161220.mount: Deactivated successfully.
May 9 23:29:16.290451 containerd[1452]: time="2025-05-09T23:29:16.290341352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:16.291328 containerd[1452]: time="2025-05-09T23:29:16.291267232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 9 23:29:16.292035 containerd[1452]: time="2025-05-09T23:29:16.292008271Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:16.294641 containerd[1452]: time="2025-05-09T23:29:16.294604172Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.400772884s"
May 9 23:29:16.294692 containerd[1452]: time="2025-05-09T23:29:16.294647678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 9 23:29:16.296806 containerd[1452]: time="2025-05-09T23:29:16.296742461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 9 23:29:16.297275 containerd[1452]: time="2025-05-09T23:29:16.296971486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:16.833769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114343387.mount: Deactivated successfully.
May 9 23:29:17.871387 containerd[1452]: time="2025-05-09T23:29:17.871018054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:17.871783 containerd[1452]: time="2025-05-09T23:29:17.871729705Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 9 23:29:17.872266 containerd[1452]: time="2025-05-09T23:29:17.872241694Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:17.874780 containerd[1452]: time="2025-05-09T23:29:17.874726026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:17.875948 containerd[1452]: time="2025-05-09T23:29:17.875908795Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.579134194s"
May 9 23:29:17.875948 containerd[1452]: time="2025-05-09T23:29:17.875945806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 9 23:29:17.876541 containerd[1452]: time="2025-05-09T23:29:17.876361327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 9 23:29:18.337507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575446001.mount: Deactivated successfully.
May 9 23:29:18.342708 containerd[1452]: time="2025-05-09T23:29:18.342662826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:29:18.343166 containerd[1452]: time="2025-05-09T23:29:18.343108519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 9 23:29:18.344050 containerd[1452]: time="2025-05-09T23:29:18.344013245Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:29:18.346080 containerd[1452]: time="2025-05-09T23:29:18.346032800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:29:18.346639 containerd[1452]: time="2025-05-09T23:29:18.346605168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 470.21546ms"
May 9 23:29:18.346667 containerd[1452]: time="2025-05-09T23:29:18.346636229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 9 23:29:18.347216 containerd[1452]: time="2025-05-09T23:29:18.347119753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 9 23:29:18.924561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062958317.mount: Deactivated successfully.
May 9 23:29:21.709718 containerd[1452]: time="2025-05-09T23:29:21.709667678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:21.711184 containerd[1452]: time="2025-05-09T23:29:21.711122299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 9 23:29:21.712077 containerd[1452]: time="2025-05-09T23:29:21.712024648Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:21.714998 containerd[1452]: time="2025-05-09T23:29:21.714946652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:21.716166 containerd[1452]: time="2025-05-09T23:29:21.716128846Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.368903831s"
May 9 23:29:21.716211 containerd[1452]: time="2025-05-09T23:29:21.716165377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 9 23:29:23.954031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 9 23:29:23.955476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:24.102125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:24.105438 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:29:24.145026 kubelet[2110]: E0509 23:29:24.144970 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:29:24.147049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:29:24.147173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:29:24.147764 systemd[1]: kubelet.service: Consumed 131ms CPU time, 102.7M memory peak.
May 9 23:29:25.796410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:25.796653 systemd[1]: kubelet.service: Consumed 131ms CPU time, 102.7M memory peak.
May 9 23:29:25.798511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:25.822904 systemd[1]: Reload requested from client PID 2125 ('systemctl') (unit session-7.scope)...
May 9 23:29:25.822917 systemd[1]: Reloading...
May 9 23:29:25.891889 zram_generator::config[2171]: No configuration found.
May 9 23:29:26.047693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:29:26.118921 systemd[1]: Reloading finished in 295 ms.
May 9 23:29:26.165605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:26.168362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:26.170498 systemd[1]: kubelet.service: Deactivated successfully.
May 9 23:29:26.170699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:26.170748 systemd[1]: kubelet.service: Consumed 90ms CPU time, 90.2M memory peak.
May 9 23:29:26.172188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:26.293727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:26.296980 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 23:29:26.330779 kubelet[2216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:29:26.330779 kubelet[2216]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 9 23:29:26.330779 kubelet[2216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:29:26.331106 kubelet[2216]: I0509 23:29:26.330792 2216 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 23:29:26.995886 kubelet[2216]: I0509 23:29:26.995626 2216 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 9 23:29:26.995886 kubelet[2216]: I0509 23:29:26.995659 2216 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 23:29:26.996078 kubelet[2216]: I0509 23:29:26.995936 2216 server.go:954] "Client rotation is on, will bootstrap in background"
May 9 23:29:27.027428 kubelet[2216]: E0509 23:29:27.027385 2216 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError"
May 9 23:29:27.029188 kubelet[2216]: I0509 23:29:27.029076 2216 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:29:27.039190 kubelet[2216]: I0509 23:29:27.039126 2216 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 9 23:29:27.041784 kubelet[2216]: I0509 23:29:27.041757 2216 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 23:29:27.042946 kubelet[2216]: I0509 23:29:27.042887 2216 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 23:29:27.043110 kubelet[2216]: I0509 23:29:27.042933 2216 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 9 23:29:27.043199 kubelet[2216]: I0509 23:29:27.043172 2216 topology_manager.go:138] "Creating topology manager with none policy"
May 9 23:29:27.043199 kubelet[2216]: I0509 23:29:27.043182 2216 container_manager_linux.go:304] "Creating device plugin manager"
May 9 23:29:27.043400 kubelet[2216]: I0509 23:29:27.043370 2216 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:29:27.057350 kubelet[2216]: I0509 23:29:27.057310 2216 kubelet.go:446] "Attempting to sync node with API server"
May 9 23:29:27.057350 kubelet[2216]: I0509 23:29:27.057340 2216 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 23:29:27.057450 kubelet[2216]: I0509 23:29:27.057365 2216 kubelet.go:352] "Adding apiserver pod source"
May 9 23:29:27.057450 kubelet[2216]: I0509 23:29:27.057387 2216 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 23:29:27.060108 kubelet[2216]: W0509 23:29:27.059935 2216 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused
May 9 23:29:27.060108 kubelet[2216]: E0509 23:29:27.059997 2216 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError"
May 9 23:29:27.060244 kubelet[2216]: W0509 23:29:27.060213 2216 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused
May 9 23:29:27.060272 kubelet[2216]: E0509 23:29:27.060257 2216 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError"
May 9 23:29:27.065249 kubelet[2216]: I0509 23:29:27.064538 2216 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 9 23:29:27.065249 kubelet[2216]: I0509 23:29:27.065118 2216 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 23:29:27.067061 kubelet[2216]: W0509 23:29:27.067037 2216 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 9 23:29:27.071496 kubelet[2216]: I0509 23:29:27.071461 2216 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 9 23:29:27.071563 kubelet[2216]: I0509 23:29:27.071502 2216 server.go:1287] "Started kubelet"
May 9 23:29:27.071947 kubelet[2216]: I0509 23:29:27.071623 2216 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 9 23:29:27.072723 kubelet[2216]: I0509 23:29:27.072701 2216 server.go:490] "Adding debug handlers to kubelet server"
May 9 23:29:27.072936 kubelet[2216]: I0509 23:29:27.072879 2216 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 23:29:27.073149 kubelet[2216]: I0509 23:29:27.073122 2216 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 23:29:27.074555 kubelet[2216]: I0509 23:29:27.074530 2216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 23:29:27.074632 kubelet[2216]: I0509 23:29:27.074573 2216 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 9 23:29:27.075169 kubelet[2216]: I0509 23:29:27.074885 2216 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 9 23:29:27.075284 kubelet[2216]: E0509 23:29:27.075252 2216 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 23:29:27.075997 kubelet[2216]: I0509 23:29:27.075976 2216 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 23:29:27.076062 kubelet[2216]: I0509 23:29:27.076038 2216 reconciler.go:26] "Reconciler: start to sync state"
May 9 23:29:27.076677 kubelet[2216]: E0509 23:29:27.076614 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms"
May 9 23:29:27.076726 kubelet[2216]: W0509 23:29:27.076685 2216 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused
May 9 23:29:27.076726 kubelet[2216]: E0509 23:29:27.076718 2216 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError"
May 9 23:29:27.081521 kubelet[2216]: I0509 23:29:27.078374 2216 factory.go:221] Registration of the systemd container factory successfully
May 9 23:29:27.081521 kubelet[2216]: I0509 23:29:27.078454 2216 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 23:29:27.084182 kubelet[2216]: I0509 23:29:27.083584 2216 factory.go:221] Registration of the containerd container factory successfully
May 9 23:29:27.090825 kubelet[2216]: E0509 23:29:27.081957 2216 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183dffb136185843 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 23:29:27.071479875 +0000 UTC m=+0.771802122,LastTimestamp:2025-05-09 23:29:27.071479875 +0000 UTC m=+0.771802122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 9 23:29:27.092076 kubelet[2216]: I0509 23:29:27.092030 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 23:29:27.093194 kubelet[2216]: I0509 23:29:27.093161 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 23:29:27.093194 kubelet[2216]: I0509 23:29:27.093189 2216 status_manager.go:227] "Starting to sync pod status with apiserver"
May 9 23:29:27.093450 kubelet[2216]: I0509 23:29:27.093204 2216 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 9 23:29:27.093450 kubelet[2216]: I0509 23:29:27.093211 2216 kubelet.go:2388] "Starting kubelet main sync loop"
May 9 23:29:27.093450 kubelet[2216]: E0509 23:29:27.093247 2216 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 23:29:27.093663 kubelet[2216]: W0509 23:29:27.093624 2216 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused
May 9 23:29:27.093705 kubelet[2216]: E0509 23:29:27.093667 2216 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError"
May 9 23:29:27.101181 kubelet[2216]: I0509 23:29:27.101152 2216 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 9 23:29:27.101181 kubelet[2216]: I0509 23:29:27.101176 2216 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 9 23:29:27.101285 kubelet[2216]: I0509 23:29:27.101196 2216 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:29:27.178719 kubelet[2216]: E0509 23:29:27.178673 2216 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 23:29:27.193823 kubelet[2216]: E0509 23:29:27.193799 2216 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 9 23:29:27.208418 kubelet[2216]: I0509 23:29:27.208384 2216 policy_none.go:49] "None policy: Start"
May 9 23:29:27.208418 kubelet[2216]: I0509 23:29:27.208413 2216 memory_manager.go:186] "Starting memorymanager" policy="None"
May 9 23:29:27.208518 kubelet[2216]: I0509 23:29:27.208430 2216 state_mem.go:35] "Initializing new in-memory state store"
May 9 23:29:27.213826 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 9 23:29:27.226555 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 9 23:29:27.229557 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 9 23:29:27.242332 kubelet[2216]: I0509 23:29:27.241743 2216 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 23:29:27.242332 kubelet[2216]: I0509 23:29:27.241960 2216 eviction_manager.go:189] "Eviction manager: starting control loop"
May 9 23:29:27.242332 kubelet[2216]: I0509 23:29:27.241974 2216 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 23:29:27.242332 kubelet[2216]: I0509 23:29:27.242245 2216 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 23:29:27.243757 kubelet[2216]: E0509 23:29:27.243732 2216 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 9 23:29:27.243884 kubelet[2216]: E0509 23:29:27.243850 2216 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 9 23:29:27.277112 kubelet[2216]: E0509 23:29:27.277016 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms"
May 9 23:29:27.343553 kubelet[2216]: I0509 23:29:27.343511 2216 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:29:27.345735 kubelet[2216]: E0509 23:29:27.345696 2216 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost"
May 9 23:29:27.401132 systemd[1]: Created slice kubepods-burstable-pod8f4ff5c1784e445885a8642f685fee2d.slice - libcontainer container kubepods-burstable-pod8f4ff5c1784e445885a8642f685fee2d.slice.
May 9 23:29:27.418459 kubelet[2216]: E0509 23:29:27.418259 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:27.421533 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 9 23:29:27.423448 kubelet[2216]: E0509 23:29:27.423427 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:27.424766 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 9 23:29:27.426413 kubelet[2216]: E0509 23:29:27.426393 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:27.477601 kubelet[2216]: I0509 23:29:27.477579 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:29:27.477749 kubelet[2216]: I0509 23:29:27.477611 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:27.477749 kubelet[2216]: I0509 23:29:27.477633 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:27.477749 kubelet[2216]: I0509 23:29:27.477651 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:27.477749 kubelet[2216]: I0509 23:29:27.477674 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 9 23:29:27.477749 kubelet[2216]: I0509 23:29:27.477690 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:29:27.477908 kubelet[2216]: I0509 23:29:27.477748 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:27.477908 kubelet[2216]: I0509 23:29:27.477786 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:27.477908 kubelet[2216]: I0509 23:29:27.477820 2216 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost"
May 9 23:29:27.547923 kubelet[2216]: I0509 23:29:27.547496 2216 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:29:27.547923 kubelet[2216]: E0509 23:29:27.547767 2216 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost"
May 9 23:29:27.678202 kubelet[2216]: E0509 23:29:27.678164 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms"
May 9 23:29:27.719482 kubelet[2216]: E0509 23:29:27.719447 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.721990 containerd[1452]: time="2025-05-09T23:29:27.721942359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f4ff5c1784e445885a8642f685fee2d,Namespace:kube-system,Attempt:0,}"
May 9 23:29:27.724715 kubelet[2216]: E0509 23:29:27.724515 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.724834 containerd[1452]: time="2025-05-09T23:29:27.724800460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 9 23:29:27.727199 kubelet[2216]: E0509 23:29:27.727159 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.727599 containerd[1452]: time="2025-05-09T23:29:27.727447202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 9 23:29:27.742113 containerd[1452]: time="2025-05-09T23:29:27.742055998Z" level=info msg="connecting to shim 878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338" address="unix:///run/containerd/s/52bfd746cdbc3a7704d14d62edf0ffda5a0bde28b8d961351d7b3ba6b280093a" namespace=k8s.io protocol=ttrpc version=3
May 9 23:29:27.752325 containerd[1452]: time="2025-05-09T23:29:27.752214622Z" level=info msg="connecting to shim b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663" address="unix:///run/containerd/s/4736866ff95d2a4ccf3bd9d16fc5b4417d45b174f1b74d0046ed4cdd1aee0598" namespace=k8s.io protocol=ttrpc version=3
May 9 23:29:27.761387 containerd[1452]: time="2025-05-09T23:29:27.761337230Z" level=info msg="connecting to shim 4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5" address="unix:///run/containerd/s/342e9dfdbe96319bec25d860b331811ab3dd5e3160b6ed50894b833a292e6699" namespace=k8s.io protocol=ttrpc version=3
May 9 23:29:27.778039 systemd[1]: Started cri-containerd-878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338.scope - libcontainer container 878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338.
May 9 23:29:27.779454 systemd[1]: Started cri-containerd-b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663.scope - libcontainer container b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663.
May 9 23:29:27.783061 systemd[1]: Started cri-containerd-4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5.scope - libcontainer container 4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5.
May 9 23:29:27.819767 containerd[1452]: time="2025-05-09T23:29:27.819592694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8f4ff5c1784e445885a8642f685fee2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338\""
May 9 23:29:27.823071 kubelet[2216]: E0509 23:29:27.823030 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.825439 containerd[1452]: time="2025-05-09T23:29:27.825383275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5\""
May 9 23:29:27.825748 containerd[1452]: time="2025-05-09T23:29:27.825699892Z" level=info msg="CreateContainer within sandbox \"878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 9 23:29:27.826653 kubelet[2216]: E0509 23:29:27.826630 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.829135 containerd[1452]: time="2025-05-09T23:29:27.828837898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663\""
May 9 23:29:27.829375 containerd[1452]: time="2025-05-09T23:29:27.829301239Z" level=info msg="CreateContainer within sandbox \"4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 9 23:29:27.829793 kubelet[2216]: E0509 23:29:27.829710 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:27.831835 containerd[1452]: time="2025-05-09T23:29:27.831802416Z" level=info msg="CreateContainer within sandbox \"b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 9 23:29:27.837551 containerd[1452]: time="2025-05-09T23:29:27.837518816Z" level=info msg="Container d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:27.839223 containerd[1452]: time="2025-05-09T23:29:27.839196614Z" level=info msg="Container b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:27.843881 containerd[1452]: time="2025-05-09T23:29:27.843840767Z" level=info msg="Container a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:27.847934 containerd[1452]: time="2025-05-09T23:29:27.847817333Z" level=info msg="CreateContainer within sandbox \"4cdb8a5a8a335c5f234aef2639852ea172cc9c854e2d0a873ddb2be638b76dd5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c\""
May 9 23:29:27.848386 containerd[1452]: time="2025-05-09T23:29:27.848342271Z" level=info msg="StartContainer for \"d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c\""
May 9 23:29:27.848963 containerd[1452]: time="2025-05-09T23:29:27.848930605Z" level=info msg="CreateContainer within sandbox \"878a4cc5e88b4cd91187a3da8045ced081175b681b85286885f640a16899d338\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58\""
May 9 23:29:27.849299 containerd[1452]: time="2025-05-09T23:29:27.849256849Z" level=info msg="StartContainer for \"b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58\""
May 9 23:29:27.849607 containerd[1452]: time="2025-05-09T23:29:27.849348007Z" level=info msg="connecting to shim d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c" address="unix:///run/containerd/s/342e9dfdbe96319bec25d860b331811ab3dd5e3160b6ed50894b833a292e6699" protocol=ttrpc version=3
May 9 23:29:27.850716 containerd[1452]: time="2025-05-09T23:29:27.850683662Z" level=info msg="connecting to shim b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58" address="unix:///run/containerd/s/52bfd746cdbc3a7704d14d62edf0ffda5a0bde28b8d961351d7b3ba6b280093a" protocol=ttrpc version=3
May 9 23:29:27.851737 containerd[1452]: time="2025-05-09T23:29:27.851705736Z" level=info msg="CreateContainer within sandbox \"b17dd92d0ef3bd83d66688ffd265a37b12b1176be7042e23587bf941dc4f6663\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49\""
May 9 23:29:27.852107 containerd[1452]: time="2025-05-09T23:29:27.852081674Z" level=info msg="StartContainer for \"a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49\""
May 9 23:29:27.853084 containerd[1452]: time="2025-05-09T23:29:27.853051657Z" level=info msg="connecting to shim a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49" address="unix:///run/containerd/s/4736866ff95d2a4ccf3bd9d16fc5b4417d45b174f1b74d0046ed4cdd1aee0598" protocol=ttrpc version=3
May 9 23:29:27.871121 systemd[1]: Started cri-containerd-b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58.scope - libcontainer container b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58.
May 9 23:29:27.874656 systemd[1]: Started cri-containerd-a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49.scope - libcontainer container a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49.
May 9 23:29:27.875761 systemd[1]: Started cri-containerd-d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c.scope - libcontainer container d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c.
May 9 23:29:27.927616 containerd[1452]: time="2025-05-09T23:29:27.925724893Z" level=info msg="StartContainer for \"d81b3172d1313f2ceb4453b7e247d2b6fda5ada79bf0fd36869efc48cef5150c\" returns successfully"
May 9 23:29:27.927616 containerd[1452]: time="2025-05-09T23:29:27.926445889Z" level=info msg="StartContainer for \"b599c0004955c7c9e9462ac6c9dc18be4e121ea8b4e44024ff699bb28186ac58\" returns successfully"
May 9 23:29:27.942988 containerd[1452]: time="2025-05-09T23:29:27.940014316Z" level=info msg="StartContainer for \"a7b3c674217435fd58a8e9550566a7d712806dd90c110b6e307d25631bebdd49\" returns successfully"
May 9 23:29:27.952039 kubelet[2216]: I0509 23:29:27.949488 2216 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:29:27.952039 kubelet[2216]: E0509 23:29:27.949919 2216 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost"
May 9 23:29:28.106111 kubelet[2216]: E0509 23:29:28.105358 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:28.107246 kubelet[2216]: E0509 23:29:28.106432 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:28.107771 kubelet[2216]: E0509 23:29:28.107752 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:28.108013 kubelet[2216]: E0509 23:29:28.107821 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:28.109449 kubelet[2216]: E0509 23:29:28.109430 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:28.109761 kubelet[2216]: E0509 23:29:28.109701 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:28.751519 kubelet[2216]: I0509 23:29:28.751453 2216 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 9 23:29:29.111697 kubelet[2216]: E0509 23:29:29.111586 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:29.111797 kubelet[2216]: E0509 23:29:29.111715 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:29.112471 kubelet[2216]: E0509 23:29:29.112449 2216 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 9 23:29:29.112587 kubelet[2216]: E0509 23:29:29.112556 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:29.239323 kubelet[2216]: E0509 23:29:29.239254 2216 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 9 23:29:29.304895 kubelet[2216]: I0509 23:29:29.304853 2216 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 9 23:29:29.307233 kubelet[2216]: E0509 23:29:29.305047 2216 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 9 23:29:29.378924 kubelet[2216]: I0509 23:29:29.377901 2216 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 9 23:29:29.384413 kubelet[2216]: E0509 23:29:29.384359 2216 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 9 23:29:29.384413 kubelet[2216]: I0509 23:29:29.384412 2216 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 9 23:29:29.386026 kubelet[2216]: E0509 23:29:29.385925 2216 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 9 23:29:29.386026 kubelet[2216]: I0509 23:29:29.385950 2216 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:29.387493 kubelet[2216]: E0509 23:29:29.387459 2216 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 9 23:29:30.061463 kubelet[2216]: I0509 23:29:30.061410 2216 apiserver.go:52] "Watching apiserver"
May 9 23:29:30.076282 kubelet[2216]: I0509 23:29:30.076244 2216 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 9 23:29:31.251677 systemd[1]: Reload requested from client PID 2490 ('systemctl') (unit session-7.scope)...
May 9 23:29:31.251695 systemd[1]: Reloading...
May 9 23:29:31.301834 kubelet[2216]: I0509 23:29:31.301808 2216 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 9 23:29:31.308245 kubelet[2216]: E0509 23:29:31.308218 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:31.325898 zram_generator::config[2537]: No configuration found.
May 9 23:29:31.404093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:29:31.410081 kubelet[2216]: I0509 23:29:31.409962 2216 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 9 23:29:31.414066 kubelet[2216]: E0509 23:29:31.414039 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:31.492985 systemd[1]: Reloading finished in 240 ms.
May 9 23:29:31.512581 kubelet[2216]: I0509 23:29:31.512470 2216 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:29:31.512606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:31.535790 systemd[1]: kubelet.service: Deactivated successfully.
May 9 23:29:31.536075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:31.536135 systemd[1]: kubelet.service: Consumed 1.209s CPU time, 123.8M memory peak.
May 9 23:29:31.537908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:29:31.659361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:29:31.668173 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 23:29:31.712995 kubelet[2576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:29:31.712995 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 9 23:29:31.712995 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:29:31.713320 kubelet[2576]: I0509 23:29:31.713123 2576 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 23:29:31.719191 kubelet[2576]: I0509 23:29:31.719149 2576 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 9 23:29:31.719281 kubelet[2576]: I0509 23:29:31.719271 2576 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 23:29:31.719561 kubelet[2576]: I0509 23:29:31.719541 2576 server.go:954] "Client rotation is on, will bootstrap in background"
May 9 23:29:31.721098 kubelet[2576]: I0509 23:29:31.721074 2576 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 9 23:29:31.723736 kubelet[2576]: I0509 23:29:31.723707 2576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:29:31.729651 kubelet[2576]: I0509 23:29:31.729632 2576 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 9 23:29:31.734063 kubelet[2576]: I0509 23:29:31.734023 2576 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 23:29:31.734390 kubelet[2576]: I0509 23:29:31.734344 2576 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 23:29:31.734615 kubelet[2576]: I0509 23:29:31.734391 2576 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 9 23:29:31.734702 kubelet[2576]: I0509 23:29:31.734633 2576 topology_manager.go:138] "Creating topology manager with none policy"
May 9 23:29:31.734702 kubelet[2576]: I0509 23:29:31.734643 2576 container_manager_linux.go:304] "Creating device plugin manager"
May 9 23:29:31.734749 kubelet[2576]: I0509 23:29:31.734706 2576 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:29:31.734902 kubelet[2576]: I0509 23:29:31.734886 2576 kubelet.go:446] "Attempting to sync node with API server"
May 9 23:29:31.734928 kubelet[2576]: I0509 23:29:31.734904 2576 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 23:29:31.734928 kubelet[2576]: I0509 23:29:31.734925 2576 kubelet.go:352] "Adding apiserver pod source"
May 9 23:29:31.735070 kubelet[2576]: I0509 23:29:31.735055 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 23:29:31.736163 kubelet[2576]: I0509 23:29:31.736134 2576 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 9 23:29:31.736752 kubelet[2576]: I0509 23:29:31.736698 2576 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 23:29:31.737160 kubelet[2576]: I0509 23:29:31.737139 2576 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 9 23:29:31.737205 kubelet[2576]: I0509 23:29:31.737172 2576 server.go:1287] "Started kubelet"
May 9 23:29:31.738960 kubelet[2576]: I0509 23:29:31.737544 2576 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 9 23:29:31.738960 kubelet[2576]: I0509 23:29:31.738198 2576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 23:29:31.738960 kubelet[2576]: I0509 23:29:31.738449 2576 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 23:29:31.739875 kubelet[2576]: I0509 23:29:31.739395 2576 server.go:490] "Adding debug handlers to kubelet server"
May 9 23:29:31.741096 kubelet[2576]: E0509 23:29:31.741065 2576 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 23:29:31.741514 kubelet[2576]: I0509 23:29:31.741265 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 23:29:31.741514 kubelet[2576]: I0509 23:29:31.741457 2576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 9 23:29:31.743889 kubelet[2576]: E0509 23:29:31.741553 2576 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 9 23:29:31.743889 kubelet[2576]: I0509 23:29:31.741597 2576 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 9 23:29:31.743889 kubelet[2576]: I0509 23:29:31.741758 2576 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 9 23:29:31.743889 kubelet[2576]: I0509 23:29:31.741896 2576 reconciler.go:26] "Reconciler: start to sync state"
May 9 23:29:31.743889 kubelet[2576]: I0509 23:29:31.743031 2576 factory.go:221] Registration of the systemd container factory successfully
May 9 23:29:31.744148 kubelet[2576]: I0509 23:29:31.744115 2576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 23:29:31.747222 kubelet[2576]: I0509 23:29:31.747189 2576 factory.go:221] Registration of
the containerd container factory successfully May 9 23:29:31.775079 kubelet[2576]: I0509 23:29:31.774834 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:29:31.778602 kubelet[2576]: I0509 23:29:31.778080 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:29:31.778602 kubelet[2576]: I0509 23:29:31.778111 2576 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:29:31.778602 kubelet[2576]: I0509 23:29:31.778132 2576 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 23:29:31.778602 kubelet[2576]: I0509 23:29:31.778140 2576 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:29:31.778602 kubelet[2576]: E0509 23:29:31.778223 2576 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:29:31.809531 kubelet[2576]: I0509 23:29:31.809483 2576 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:29:31.809531 kubelet[2576]: I0509 23:29:31.809510 2576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:29:31.809531 kubelet[2576]: I0509 23:29:31.809531 2576 state_mem.go:36] "Initialized new in-memory state store" May 9 23:29:31.810016 kubelet[2576]: I0509 23:29:31.809986 2576 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 23:29:31.810073 kubelet[2576]: I0509 23:29:31.810010 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 23:29:31.810073 kubelet[2576]: I0509 23:29:31.810030 2576 policy_none.go:49] "None policy: Start" May 9 23:29:31.810073 kubelet[2576]: I0509 23:29:31.810041 2576 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:29:31.810073 kubelet[2576]: I0509 23:29:31.810051 2576 state_mem.go:35] "Initializing new in-memory state store" May 9 23:29:31.810151 kubelet[2576]: I0509 
23:29:31.810145 2576 state_mem.go:75] "Updated machine memory state" May 9 23:29:31.814603 kubelet[2576]: I0509 23:29:31.814575 2576 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:29:31.814756 kubelet[2576]: I0509 23:29:31.814731 2576 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:29:31.814785 kubelet[2576]: I0509 23:29:31.814750 2576 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:29:31.814978 kubelet[2576]: I0509 23:29:31.814957 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:29:31.815601 kubelet[2576]: E0509 23:29:31.815580 2576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 23:29:31.879624 kubelet[2576]: I0509 23:29:31.879424 2576 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 23:29:31.879624 kubelet[2576]: I0509 23:29:31.879516 2576 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 23:29:31.880070 kubelet[2576]: I0509 23:29:31.879980 2576 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 23:29:31.884542 kubelet[2576]: E0509 23:29:31.884442 2576 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 23:29:31.884652 kubelet[2576]: E0509 23:29:31.884629 2576 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 23:29:31.918912 kubelet[2576]: I0509 23:29:31.918885 2576 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 23:29:31.924744 kubelet[2576]: I0509 
23:29:31.924704 2576 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 9 23:29:31.924895 kubelet[2576]: I0509 23:29:31.924771 2576 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 9 23:29:31.943041 kubelet[2576]: I0509 23:29:31.942991 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost" May 9 23:29:31.943097 kubelet[2576]: I0509 23:29:31.943053 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:29:31.943097 kubelet[2576]: I0509 23:29:31.943085 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:29:31.943153 kubelet[2576]: I0509 23:29:31.943114 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 9 23:29:31.943153 kubelet[2576]: I0509 23:29:31.943141 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost" May 9 23:29:31.943197 kubelet[2576]: I0509 23:29:31.943168 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f4ff5c1784e445885a8642f685fee2d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8f4ff5c1784e445885a8642f685fee2d\") " pod="kube-system/kube-apiserver-localhost" May 9 23:29:31.943197 kubelet[2576]: I0509 23:29:31.943186 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:29:31.943233 kubelet[2576]: I0509 23:29:31.943201 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:29:31.943233 kubelet[2576]: I0509 23:29:31.943216 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 23:29:32.185673 kubelet[2576]: E0509 23:29:32.184881 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.185673 kubelet[2576]: E0509 23:29:32.184919 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.185673 kubelet[2576]: E0509 23:29:32.184960 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.256726 sudo[2614]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 23:29:32.257028 sudo[2614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 23:29:32.682382 sudo[2614]: pam_unix(sudo:session): session closed for user root May 9 23:29:32.735740 kubelet[2576]: I0509 23:29:32.735680 2576 apiserver.go:52] "Watching apiserver" May 9 23:29:32.742277 kubelet[2576]: I0509 23:29:32.742238 2576 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:29:32.797475 kubelet[2576]: E0509 23:29:32.797371 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.798025 kubelet[2576]: E0509 23:29:32.797557 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.798025 kubelet[2576]: I0509 23:29:32.797596 2576 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 23:29:32.804703 kubelet[2576]: E0509 23:29:32.804613 2576 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 23:29:32.804800 
kubelet[2576]: E0509 23:29:32.804755 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:32.821024 kubelet[2576]: I0509 23:29:32.820926 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.820898125 podStartE2EDuration="1.820898125s" podCreationTimestamp="2025-05-09 23:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:29:32.820094756 +0000 UTC m=+1.148622702" watchObservedRunningTime="2025-05-09 23:29:32.820898125 +0000 UTC m=+1.149426071" May 9 23:29:32.836332 kubelet[2576]: I0509 23:29:32.836281 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.836249569 podStartE2EDuration="1.836249569s" podCreationTimestamp="2025-05-09 23:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:29:32.836053943 +0000 UTC m=+1.164581889" watchObservedRunningTime="2025-05-09 23:29:32.836249569 +0000 UTC m=+1.164777555" May 9 23:29:32.836467 kubelet[2576]: I0509 23:29:32.836359 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.836354417 podStartE2EDuration="1.836354417s" podCreationTimestamp="2025-05-09 23:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:29:32.828390833 +0000 UTC m=+1.156918779" watchObservedRunningTime="2025-05-09 23:29:32.836354417 +0000 UTC m=+1.164882403" May 9 23:29:33.799003 kubelet[2576]: E0509 23:29:33.798907 2576 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:33.799003 kubelet[2576]: E0509 23:29:33.798942 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:35.384968 sudo[1655]: pam_unix(sudo:session): session closed for user root May 9 23:29:35.386652 sshd[1654]: Connection closed by 10.0.0.1 port 54002 May 9 23:29:35.387106 sshd-session[1651]: pam_unix(sshd:session): session closed for user core May 9 23:29:35.390488 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. May 9 23:29:35.391119 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:54002.service: Deactivated successfully. May 9 23:29:35.393443 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:29:35.393664 systemd[1]: session-7.scope: Consumed 7.293s CPU time, 263.4M memory peak. May 9 23:29:35.395415 systemd-logind[1438]: Removed session 7. May 9 23:29:36.232982 kubelet[2576]: I0509 23:29:36.232946 2576 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 23:29:36.236900 containerd[1452]: time="2025-05-09T23:29:36.235159092Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:29:36.237211 kubelet[2576]: I0509 23:29:36.235426 2576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 23:29:37.038439 systemd[1]: Created slice kubepods-besteffort-pod14c924c2_d3b4_4b24_b1e9_b72924e621d1.slice - libcontainer container kubepods-besteffort-pod14c924c2_d3b4_4b24_b1e9_b72924e621d1.slice. May 9 23:29:37.052066 systemd[1]: Created slice kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice - libcontainer container kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice. 
May 9 23:29:37.075512 kubelet[2576]: I0509 23:29:37.075429 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-etc-cni-netd\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075512 kubelet[2576]: I0509 23:29:37.075471 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14c924c2-d3b4-4b24-b1e9-b72924e621d1-xtables-lock\") pod \"kube-proxy-ldddj\" (UID: \"14c924c2-d3b4-4b24-b1e9-b72924e621d1\") " pod="kube-system/kube-proxy-ldddj" May 9 23:29:37.075512 kubelet[2576]: I0509 23:29:37.075489 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14c924c2-d3b4-4b24-b1e9-b72924e621d1-lib-modules\") pod \"kube-proxy-ldddj\" (UID: \"14c924c2-d3b4-4b24-b1e9-b72924e621d1\") " pod="kube-system/kube-proxy-ldddj" May 9 23:29:37.075661 kubelet[2576]: I0509 23:29:37.075536 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-cgroup\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075661 kubelet[2576]: I0509 23:29:37.075555 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-bpf-maps\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075661 kubelet[2576]: I0509 23:29:37.075574 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-kernel\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075661 kubelet[2576]: I0509 23:29:37.075599 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-config-path\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075661 kubelet[2576]: I0509 23:29:37.075614 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-net\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075765 kubelet[2576]: I0509 23:29:37.075630 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts79d\" (UniqueName: \"kubernetes.io/projected/14c924c2-d3b4-4b24-b1e9-b72924e621d1-kube-api-access-ts79d\") pod \"kube-proxy-ldddj\" (UID: \"14c924c2-d3b4-4b24-b1e9-b72924e621d1\") " pod="kube-system/kube-proxy-ldddj" May 9 23:29:37.075765 kubelet[2576]: I0509 23:29:37.075646 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-run\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075765 kubelet[2576]: I0509 23:29:37.075663 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-hostproc\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075765 kubelet[2576]: I0509 23:29:37.075678 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-lib-modules\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075765 kubelet[2576]: I0509 23:29:37.075695 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6c14fb3-aa11-45a3-8840-665ba358b454-clustermesh-secrets\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075997 kubelet[2576]: I0509 23:29:37.075734 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhrb\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075997 kubelet[2576]: I0509 23:29:37.075781 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14c924c2-d3b4-4b24-b1e9-b72924e621d1-kube-proxy\") pod \"kube-proxy-ldddj\" (UID: \"14c924c2-d3b4-4b24-b1e9-b72924e621d1\") " pod="kube-system/kube-proxy-ldddj" May 9 23:29:37.075997 kubelet[2576]: I0509 23:29:37.075819 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-hubble-tls\") pod \"cilium-mh977\" (UID: 
\"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075997 kubelet[2576]: I0509 23:29:37.075848 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cni-path\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.075997 kubelet[2576]: I0509 23:29:37.075887 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-xtables-lock\") pod \"cilium-mh977\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " pod="kube-system/cilium-mh977" May 9 23:29:37.186516 kubelet[2576]: E0509 23:29:37.186481 2576 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 9 23:29:37.186629 kubelet[2576]: E0509 23:29:37.186534 2576 projected.go:194] Error preparing data for projected volume kube-api-access-ts79d for pod kube-system/kube-proxy-ldddj: configmap "kube-root-ca.crt" not found May 9 23:29:37.186629 kubelet[2576]: E0509 23:29:37.186586 2576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14c924c2-d3b4-4b24-b1e9-b72924e621d1-kube-api-access-ts79d podName:14c924c2-d3b4-4b24-b1e9-b72924e621d1 nodeName:}" failed. No retries permitted until 2025-05-09 23:29:37.686567004 +0000 UTC m=+6.015094950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ts79d" (UniqueName: "kubernetes.io/projected/14c924c2-d3b4-4b24-b1e9-b72924e621d1-kube-api-access-ts79d") pod "kube-proxy-ldddj" (UID: "14c924c2-d3b4-4b24-b1e9-b72924e621d1") : configmap "kube-root-ca.crt" not found May 9 23:29:37.188819 kubelet[2576]: E0509 23:29:37.188712 2576 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 9 23:29:37.188819 kubelet[2576]: E0509 23:29:37.188735 2576 projected.go:194] Error preparing data for projected volume kube-api-access-5hhrb for pod kube-system/cilium-mh977: configmap "kube-root-ca.crt" not found May 9 23:29:37.188819 kubelet[2576]: E0509 23:29:37.188805 2576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb podName:b6c14fb3-aa11-45a3-8840-665ba358b454 nodeName:}" failed. No retries permitted until 2025-05-09 23:29:37.688782915 +0000 UTC m=+6.017310861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5hhrb" (UniqueName: "kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb") pod "cilium-mh977" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454") : configmap "kube-root-ca.crt" not found May 9 23:29:37.445327 systemd[1]: Created slice kubepods-besteffort-pode5235020_a1de_4e25_93f9_9cab6d569c73.slice - libcontainer container kubepods-besteffort-pode5235020_a1de_4e25_93f9_9cab6d569c73.slice. 
May 9 23:29:37.477902 kubelet[2576]: I0509 23:29:37.477838 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szq9r\" (UniqueName: \"kubernetes.io/projected/e5235020-a1de-4e25-93f9-9cab6d569c73-kube-api-access-szq9r\") pod \"cilium-operator-6c4d7847fc-9kc2l\" (UID: \"e5235020-a1de-4e25-93f9-9cab6d569c73\") " pod="kube-system/cilium-operator-6c4d7847fc-9kc2l" May 9 23:29:37.477902 kubelet[2576]: I0509 23:29:37.477904 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5235020-a1de-4e25-93f9-9cab6d569c73-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9kc2l\" (UID: \"e5235020-a1de-4e25-93f9-9cab6d569c73\") " pod="kube-system/cilium-operator-6c4d7847fc-9kc2l" May 9 23:29:37.542155 kubelet[2576]: E0509 23:29:37.542104 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.749056 kubelet[2576]: E0509 23:29:37.748944 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.749586 containerd[1452]: time="2025-05-09T23:29:37.749547426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9kc2l,Uid:e5235020-a1de-4e25-93f9-9cab6d569c73,Namespace:kube-system,Attempt:0,}" May 9 23:29:37.803077 containerd[1452]: time="2025-05-09T23:29:37.803027488Z" level=info msg="connecting to shim a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee" address="unix:///run/containerd/s/b30b98afef35d20714039db0f06ab2d7043e2af356011f5760debecb28e1a37b" namespace=k8s.io protocol=ttrpc version=3 May 9 23:29:37.804887 kubelet[2576]: E0509 23:29:37.804839 2576 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.834096 systemd[1]: Started cri-containerd-a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee.scope - libcontainer container a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee. May 9 23:29:37.864678 containerd[1452]: time="2025-05-09T23:29:37.864629543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9kc2l,Uid:e5235020-a1de-4e25-93f9-9cab6d569c73,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\"" May 9 23:29:37.865478 kubelet[2576]: E0509 23:29:37.865455 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.866881 containerd[1452]: time="2025-05-09T23:29:37.866816650Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:29:37.948247 kubelet[2576]: E0509 23:29:37.948215 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.948903 containerd[1452]: time="2025-05-09T23:29:37.948840923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ldddj,Uid:14c924c2-d3b4-4b24-b1e9-b72924e621d1,Namespace:kube-system,Attempt:0,}" May 9 23:29:37.955739 kubelet[2576]: E0509 23:29:37.955705 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:29:37.956346 containerd[1452]: time="2025-05-09T23:29:37.956197143Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-mh977,Uid:b6c14fb3-aa11-45a3-8840-665ba358b454,Namespace:kube-system,Attempt:0,}" May 9 23:29:37.968941 containerd[1452]: time="2025-05-09T23:29:37.968886135Z" level=info msg="connecting to shim ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa" address="unix:///run/containerd/s/5ea0989b2d9141a545f8f3c93296fe55ba542a7844eb2130653a95b9907cec35" namespace=k8s.io protocol=ttrpc version=3 May 9 23:29:37.978470 containerd[1452]: time="2025-05-09T23:29:37.976970524Z" level=info msg="connecting to shim 9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" namespace=k8s.io protocol=ttrpc version=3 May 9 23:29:38.000145 systemd[1]: Started cri-containerd-ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa.scope - libcontainer container ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa. May 9 23:29:38.002989 systemd[1]: Started cri-containerd-9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8.scope - libcontainer container 9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8. 
May 9 23:29:38.029546 containerd[1452]: time="2025-05-09T23:29:38.029499639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mh977,Uid:b6c14fb3-aa11-45a3-8840-665ba358b454,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\""
May 9 23:29:38.030063 containerd[1452]: time="2025-05-09T23:29:38.029927249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ldddj,Uid:14c924c2-d3b4-4b24-b1e9-b72924e621d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa\""
May 9 23:29:38.030452 kubelet[2576]: E0509 23:29:38.030424 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.030525 kubelet[2576]: E0509 23:29:38.030506 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.033485 containerd[1452]: time="2025-05-09T23:29:38.033410131Z" level=info msg="CreateContainer within sandbox \"ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 9 23:29:38.043377 containerd[1452]: time="2025-05-09T23:29:38.043330079Z" level=info msg="Container 6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:38.050050 containerd[1452]: time="2025-05-09T23:29:38.049996490Z" level=info msg="CreateContainer within sandbox \"ff9e9713c78c58416bc56fedc46e781c2609a17a6d043f6f5c908cedfd4d04fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86\""
May 9 23:29:38.050668 containerd[1452]: time="2025-05-09T23:29:38.050538633Z" level=info msg="StartContainer for \"6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86\""
May 9 23:29:38.052173 containerd[1452]: time="2025-05-09T23:29:38.052143979Z" level=info msg="connecting to shim 6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86" address="unix:///run/containerd/s/5ea0989b2d9141a545f8f3c93296fe55ba542a7844eb2130653a95b9907cec35" protocol=ttrpc version=3
May 9 23:29:38.079050 systemd[1]: Started cri-containerd-6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86.scope - libcontainer container 6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86.
May 9 23:29:38.111947 containerd[1452]: time="2025-05-09T23:29:38.111907892Z" level=info msg="StartContainer for \"6f537d886cd935a2925a37e3b1267fb39a85dbac550b67e5e2fd40a78cca6c86\" returns successfully"
May 9 23:29:38.307344 kubelet[2576]: E0509 23:29:38.306850 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.808946 kubelet[2576]: E0509 23:29:38.808911 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.812326 kubelet[2576]: E0509 23:29:38.812202 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.812326 kubelet[2576]: E0509 23:29:38.812221 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:38.819042 kubelet[2576]: I0509 23:29:38.818937 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ldddj" podStartSLOduration=1.8189193590000001 podStartE2EDuration="1.818919359s" podCreationTimestamp="2025-05-09 23:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:29:38.818312529 +0000 UTC m=+7.146840475" watchObservedRunningTime="2025-05-09 23:29:38.818919359 +0000 UTC m=+7.147447305"
May 9 23:29:39.304048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955737370.mount: Deactivated successfully.
May 9 23:29:39.748741 containerd[1452]: time="2025-05-09T23:29:39.748682665Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:39.749888 containerd[1452]: time="2025-05-09T23:29:39.749761583Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 9 23:29:39.750710 containerd[1452]: time="2025-05-09T23:29:39.750676243Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:39.751949 containerd[1452]: time="2025-05-09T23:29:39.751827809Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.884963673s"
May 9 23:29:39.751949 containerd[1452]: time="2025-05-09T23:29:39.751882375Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 9 23:29:39.756430 containerd[1452]: time="2025-05-09T23:29:39.756173525Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 9 23:29:39.757397 containerd[1452]: time="2025-05-09T23:29:39.757332652Z" level=info msg="CreateContainer within sandbox \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 9 23:29:39.784306 containerd[1452]: time="2025-05-09T23:29:39.783937884Z" level=info msg="Container f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:39.790105 containerd[1452]: time="2025-05-09T23:29:39.790046833Z" level=info msg="CreateContainer within sandbox \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\""
May 9 23:29:39.790592 containerd[1452]: time="2025-05-09T23:29:39.790508843Z" level=info msg="StartContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\""
May 9 23:29:39.791573 containerd[1452]: time="2025-05-09T23:29:39.791535516Z" level=info msg="connecting to shim f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61" address="unix:///run/containerd/s/b30b98afef35d20714039db0f06ab2d7043e2af356011f5760debecb28e1a37b" protocol=ttrpc version=3
May 9 23:29:39.820046 systemd[1]: Started cri-containerd-f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61.scope - libcontainer container f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61.
May 9 23:29:39.846653 containerd[1452]: time="2025-05-09T23:29:39.846568539Z" level=info msg="StartContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" returns successfully"
May 9 23:29:40.828132 kubelet[2576]: E0509 23:29:40.828097 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:40.848252 kubelet[2576]: I0509 23:29:40.848156 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9kc2l" podStartSLOduration=1.958204479 podStartE2EDuration="3.848118704s" podCreationTimestamp="2025-05-09 23:29:37 +0000 UTC" firstStartedPulling="2025-05-09 23:29:37.866070679 +0000 UTC m=+6.194598625" lastFinishedPulling="2025-05-09 23:29:39.755984904 +0000 UTC m=+8.084512850" observedRunningTime="2025-05-09 23:29:40.847621413 +0000 UTC m=+9.176149359" watchObservedRunningTime="2025-05-09 23:29:40.848118704 +0000 UTC m=+9.176646650"
May 9 23:29:41.827948 kubelet[2576]: E0509 23:29:41.827769 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:41.983356 kubelet[2576]: E0509 23:29:41.983314 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:42.839962 kubelet[2576]: E0509 23:29:42.839899 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:46.134507 update_engine[1444]: I20250509 23:29:46.134428 1444 update_attempter.cc:509] Updating boot flags...
May 9 23:29:46.170964 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3017)
May 9 23:29:46.221922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3015)
May 9 23:29:46.259886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3015)
May 9 23:29:53.272987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037592805.mount: Deactivated successfully.
May 9 23:29:54.463839 containerd[1452]: time="2025-05-09T23:29:54.463687486Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:54.466886 containerd[1452]: time="2025-05-09T23:29:54.464533250Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 9 23:29:54.468431 containerd[1452]: time="2025-05-09T23:29:54.468393369Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:29:54.470543 containerd[1452]: time="2025-05-09T23:29:54.470512038Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.714290709s"
May 9 23:29:54.470577 containerd[1452]: time="2025-05-09T23:29:54.470546840Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 9 23:29:54.472357 containerd[1452]: time="2025-05-09T23:29:54.472328892Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 23:29:54.478570 containerd[1452]: time="2025-05-09T23:29:54.478534932Z" level=info msg="Container 46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:54.484254 containerd[1452]: time="2025-05-09T23:29:54.484213945Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\""
May 9 23:29:54.484632 containerd[1452]: time="2025-05-09T23:29:54.484586844Z" level=info msg="StartContainer for \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\""
May 9 23:29:54.485476 containerd[1452]: time="2025-05-09T23:29:54.485436128Z" level=info msg="connecting to shim 46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" protocol=ttrpc version=3
May 9 23:29:54.515091 systemd[1]: Started cri-containerd-46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d.scope - libcontainer container 46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d.
May 9 23:29:54.538788 containerd[1452]: time="2025-05-09T23:29:54.538698554Z" level=info msg="StartContainer for \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" returns successfully"
May 9 23:29:54.622706 systemd[1]: cri-containerd-46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d.scope: Deactivated successfully.
May 9 23:29:54.622980 systemd[1]: cri-containerd-46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d.scope: Consumed 93ms CPU time, 6.8M memory peak, 48K read from disk, 3.1M written to disk.
May 9 23:29:54.645725 containerd[1452]: time="2025-05-09T23:29:54.645671591Z" level=info msg="received exit event container_id:\"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" id:\"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" pid:3065 exited_at:{seconds:1746833394 nanos:641536018}"
May 9 23:29:54.645900 containerd[1452]: time="2025-05-09T23:29:54.645848480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" id:\"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" pid:3065 exited_at:{seconds:1746833394 nanos:641536018}"
May 9 23:29:54.676091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d-rootfs.mount: Deactivated successfully.
May 9 23:29:54.864815 kubelet[2576]: E0509 23:29:54.864653 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:54.868524 containerd[1452]: time="2025-05-09T23:29:54.868465280Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 23:29:54.876649 containerd[1452]: time="2025-05-09T23:29:54.876381728Z" level=info msg="Container 75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:54.885627 containerd[1452]: time="2025-05-09T23:29:54.885468197Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\""
May 9 23:29:54.886050 containerd[1452]: time="2025-05-09T23:29:54.886015825Z" level=info msg="StartContainer for \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\""
May 9 23:29:54.887059 containerd[1452]: time="2025-05-09T23:29:54.887035638Z" level=info msg="connecting to shim 75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" protocol=ttrpc version=3
May 9 23:29:54.908038 systemd[1]: Started cri-containerd-75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32.scope - libcontainer container 75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32.
May 9 23:29:54.932994 containerd[1452]: time="2025-05-09T23:29:54.932916524Z" level=info msg="StartContainer for \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" returns successfully"
May 9 23:29:54.952163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:29:54.952373 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:29:54.952543 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 9 23:29:54.954109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:29:54.955608 systemd[1]: cri-containerd-75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32.scope: Deactivated successfully.
May 9 23:29:54.956755 containerd[1452]: time="2025-05-09T23:29:54.956721551Z" level=info msg="received exit event container_id:\"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" id:\"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" pid:3110 exited_at:{seconds:1746833394 nanos:956449217}"
May 9 23:29:54.956755 containerd[1452]: time="2025-05-09T23:29:54.956798555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" id:\"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" pid:3110 exited_at:{seconds:1746833394 nanos:956449217}"
May 9 23:29:54.971553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:29:55.867935 kubelet[2576]: E0509 23:29:55.867333 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:55.869743 containerd[1452]: time="2025-05-09T23:29:55.869683147Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 23:29:55.902339 containerd[1452]: time="2025-05-09T23:29:55.902296396Z" level=info msg="Container 62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:55.905903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987186955.mount: Deactivated successfully.
May 9 23:29:55.912586 containerd[1452]: time="2025-05-09T23:29:55.912538982Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\""
May 9 23:29:55.913064 containerd[1452]: time="2025-05-09T23:29:55.913038206Z" level=info msg="StartContainer for \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\""
May 9 23:29:55.914390 containerd[1452]: time="2025-05-09T23:29:55.914366072Z" level=info msg="connecting to shim 62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" protocol=ttrpc version=3
May 9 23:29:55.936081 systemd[1]: Started cri-containerd-62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139.scope - libcontainer container 62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139.
May 9 23:29:55.970660 containerd[1452]: time="2025-05-09T23:29:55.970615968Z" level=info msg="StartContainer for \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" returns successfully"
May 9 23:29:55.986686 systemd[1]: cri-containerd-62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139.scope: Deactivated successfully.
May 9 23:29:55.993597 containerd[1452]: time="2025-05-09T23:29:55.993557940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" id:\"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" pid:3158 exited_at:{seconds:1746833395 nanos:993125519}"
May 9 23:29:55.993742 containerd[1452]: time="2025-05-09T23:29:55.993682786Z" level=info msg="received exit event container_id:\"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" id:\"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" pid:3158 exited_at:{seconds:1746833395 nanos:993125519}"
May 9 23:29:56.014068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139-rootfs.mount: Deactivated successfully.
May 9 23:29:56.873189 kubelet[2576]: E0509 23:29:56.872603 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:56.875643 containerd[1452]: time="2025-05-09T23:29:56.875398405Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 23:29:56.889717 containerd[1452]: time="2025-05-09T23:29:56.888785638Z" level=info msg="Container 491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:56.893549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105542752.mount: Deactivated successfully.
May 9 23:29:56.898508 containerd[1452]: time="2025-05-09T23:29:56.898460895Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\""
May 9 23:29:56.900115 containerd[1452]: time="2025-05-09T23:29:56.898912117Z" level=info msg="StartContainer for \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\""
May 9 23:29:56.901562 containerd[1452]: time="2025-05-09T23:29:56.901531160Z" level=info msg="connecting to shim 491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" protocol=ttrpc version=3
May 9 23:29:56.927028 systemd[1]: Started cri-containerd-491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d.scope - libcontainer container 491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d.
May 9 23:29:56.959166 systemd[1]: cri-containerd-491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d.scope: Deactivated successfully.
May 9 23:29:56.960502 containerd[1452]: time="2025-05-09T23:29:56.960469147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" id:\"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" pid:3196 exited_at:{seconds:1746833396 nanos:959287451}"
May 9 23:29:56.962090 containerd[1452]: time="2025-05-09T23:29:56.961595600Z" level=info msg="received exit event container_id:\"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" id:\"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" pid:3196 exited_at:{seconds:1746833396 nanos:959287451}"
May 9 23:29:56.968364 containerd[1452]: time="2025-05-09T23:29:56.965804559Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice/cri-containerd-491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d.scope/memory.events\": no such file or directory"
May 9 23:29:56.968589 containerd[1452]: time="2025-05-09T23:29:56.968562729Z" level=info msg="StartContainer for \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" returns successfully"
May 9 23:29:56.978092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d-rootfs.mount: Deactivated successfully.
May 9 23:29:57.879125 kubelet[2576]: E0509 23:29:57.878921 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:57.883889 containerd[1452]: time="2025-05-09T23:29:57.882501380Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 23:29:57.896425 containerd[1452]: time="2025-05-09T23:29:57.896374089Z" level=info msg="Container cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9: CDI devices from CRI Config.CDIDevices: []"
May 9 23:29:57.904166 containerd[1452]: time="2025-05-09T23:29:57.904121320Z" level=info msg="CreateContainer within sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\""
May 9 23:29:57.904682 containerd[1452]: time="2025-05-09T23:29:57.904649704Z" level=info msg="StartContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\""
May 9 23:29:57.905620 containerd[1452]: time="2025-05-09T23:29:57.905569105Z" level=info msg="connecting to shim cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9" address="unix:///run/containerd/s/a036ee235eadb7298fe6b49089b87580f9bb1db8dce6fc0a4e4bbb58d0dae954" protocol=ttrpc version=3
May 9 23:29:57.924078 systemd[1]: Started cri-containerd-cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9.scope - libcontainer container cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9.
May 9 23:29:57.963729 containerd[1452]: time="2025-05-09T23:29:57.961219228Z" level=info msg="StartContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" returns successfully"
May 9 23:29:58.067912 containerd[1452]: time="2025-05-09T23:29:58.067783376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" id:\"183a0f62b8ed2720fc7a728036a1d590729bfdedb8eb4d002f4d0d38fb48599f\" pid:3267 exited_at:{seconds:1746833398 nanos:57680137}"
May 9 23:29:58.122947 kubelet[2576]: I0509 23:29:58.122062 2576 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 9 23:29:58.157894 systemd[1]: Created slice kubepods-burstable-pod5a86fd07_f312_40cc_8d37_4c00db4605e1.slice - libcontainer container kubepods-burstable-pod5a86fd07_f312_40cc_8d37_4c00db4605e1.slice.
May 9 23:29:58.163983 systemd[1]: Created slice kubepods-burstable-podabe0c327_168e_42c4_a828_ec895bb9f655.slice - libcontainer container kubepods-burstable-podabe0c327_168e_42c4_a828_ec895bb9f655.slice.
May 9 23:29:58.223672 kubelet[2576]: I0509 23:29:58.223631 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m4s\" (UniqueName: \"kubernetes.io/projected/5a86fd07-f312-40cc-8d37-4c00db4605e1-kube-api-access-p5m4s\") pod \"coredns-668d6bf9bc-j59vz\" (UID: \"5a86fd07-f312-40cc-8d37-4c00db4605e1\") " pod="kube-system/coredns-668d6bf9bc-j59vz"
May 9 23:29:58.223808 kubelet[2576]: I0509 23:29:58.223681 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abe0c327-168e-42c4-a828-ec895bb9f655-config-volume\") pod \"coredns-668d6bf9bc-77r98\" (UID: \"abe0c327-168e-42c4-a828-ec895bb9f655\") " pod="kube-system/coredns-668d6bf9bc-77r98"
May 9 23:29:58.223808 kubelet[2576]: I0509 23:29:58.223701 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kq6\" (UniqueName: \"kubernetes.io/projected/abe0c327-168e-42c4-a828-ec895bb9f655-kube-api-access-h9kq6\") pod \"coredns-668d6bf9bc-77r98\" (UID: \"abe0c327-168e-42c4-a828-ec895bb9f655\") " pod="kube-system/coredns-668d6bf9bc-77r98"
May 9 23:29:58.223808 kubelet[2576]: I0509 23:29:58.223718 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a86fd07-f312-40cc-8d37-4c00db4605e1-config-volume\") pod \"coredns-668d6bf9bc-j59vz\" (UID: \"5a86fd07-f312-40cc-8d37-4c00db4605e1\") " pod="kube-system/coredns-668d6bf9bc-j59vz"
May 9 23:29:58.461512 kubelet[2576]: E0509 23:29:58.461478 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:58.463442 containerd[1452]: time="2025-05-09T23:29:58.463171376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j59vz,Uid:5a86fd07-f312-40cc-8d37-4c00db4605e1,Namespace:kube-system,Attempt:0,}"
May 9 23:29:58.466616 kubelet[2576]: E0509 23:29:58.466591 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:58.468098 containerd[1452]: time="2025-05-09T23:29:58.468002666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77r98,Uid:abe0c327-168e-42c4-a828-ec895bb9f655,Namespace:kube-system,Attempt:0,}"
May 9 23:29:58.891353 kubelet[2576]: E0509 23:29:58.891236 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:29:58.984435 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:47896.service - OpenSSH per-connection server daemon (10.0.0.1:47896).
May 9 23:29:59.041682 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 47896 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:29:59.043142 sshd-session[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:29:59.047572 systemd-logind[1438]: New session 8 of user core.
May 9 23:29:59.056061 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 23:29:59.179503 sshd[3379]: Connection closed by 10.0.0.1 port 47896
May 9 23:29:59.179743 sshd-session[3375]: pam_unix(sshd:session): session closed for user core
May 9 23:29:59.182447 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit.
May 9 23:29:59.184060 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:47896.service: Deactivated successfully.
May 9 23:29:59.186556 systemd[1]: session-8.scope: Deactivated successfully.
May 9 23:29:59.187553 systemd-logind[1438]: Removed session 8.
May 9 23:29:59.891851 kubelet[2576]: E0509 23:29:59.891823 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:00.145239 systemd-networkd[1391]: cilium_host: Link UP
May 9 23:30:00.145433 systemd-networkd[1391]: cilium_net: Link UP
May 9 23:30:00.145436 systemd-networkd[1391]: cilium_net: Gained carrier
May 9 23:30:00.145646 systemd-networkd[1391]: cilium_host: Gained carrier
May 9 23:30:00.145858 systemd-networkd[1391]: cilium_host: Gained IPv6LL
May 9 23:30:00.233974 systemd-networkd[1391]: cilium_vxlan: Link UP
May 9 23:30:00.233980 systemd-networkd[1391]: cilium_vxlan: Gained carrier
May 9 23:30:00.532894 kernel: NET: Registered PF_ALG protocol family
May 9 23:30:00.873996 systemd-networkd[1391]: cilium_net: Gained IPv6LL
May 9 23:30:00.893805 kubelet[2576]: E0509 23:30:00.893763 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:01.107289 systemd-networkd[1391]: lxc_health: Link UP
May 9 23:30:01.111026 systemd-networkd[1391]: lxc_health: Gained carrier
May 9 23:30:01.589976 kernel: eth0: renamed from tmpa8d52
May 9 23:30:01.598898 kernel: eth0: renamed from tmp1196f
May 9 23:30:01.604242 systemd-networkd[1391]: lxcf93959b35a8f: Link UP
May 9 23:30:01.607231 systemd-networkd[1391]: lxc1c197e9f97bb: Link UP
May 9 23:30:01.607501 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
May 9 23:30:01.607846 systemd-networkd[1391]: lxc1c197e9f97bb: Gained carrier
May 9 23:30:01.608059 systemd-networkd[1391]: lxcf93959b35a8f: Gained carrier
May 9 23:30:01.975497 kubelet[2576]: E0509 23:30:01.974065 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:01.991648 kubelet[2576]: I0509 23:30:01.991576 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mh977" podStartSLOduration=8.551618356 podStartE2EDuration="24.991560628s" podCreationTimestamp="2025-05-09 23:29:37 +0000 UTC" firstStartedPulling="2025-05-09 23:29:38.031244521 +0000 UTC m=+6.359772427" lastFinishedPulling="2025-05-09 23:29:54.471186753 +0000 UTC m=+22.799714699" observedRunningTime="2025-05-09 23:29:58.913974027 +0000 UTC m=+27.242501973" watchObservedRunningTime="2025-05-09 23:30:01.991560628 +0000 UTC m=+30.320088574"
May 9 23:30:02.410959 systemd-networkd[1391]: lxc_health: Gained IPv6LL
May 9 23:30:02.730002 systemd-networkd[1391]: lxcf93959b35a8f: Gained IPv6LL
May 9 23:30:02.794004 systemd-networkd[1391]: lxc1c197e9f97bb: Gained IPv6LL
May 9 23:30:02.896961 kubelet[2576]: E0509 23:30:02.896930 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:03.899125 kubelet[2576]: E0509 23:30:03.899046 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:04.199171 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:33386.service - OpenSSH per-connection server daemon (10.0.0.1:33386).
May 9 23:30:04.255782 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 33386 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:04.257350 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:04.261934 systemd-logind[1438]: New session 9 of user core.
May 9 23:30:04.271099 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 23:30:04.390185 sshd[3775]: Connection closed by 10.0.0.1 port 33386
May 9 23:30:04.390529 sshd-session[3773]: pam_unix(sshd:session): session closed for user core
May 9 23:30:04.393842 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:33386.service: Deactivated successfully.
May 9 23:30:04.397638 systemd[1]: session-9.scope: Deactivated successfully.
May 9 23:30:04.398601 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit.
May 9 23:30:04.399432 systemd-logind[1438]: Removed session 9.
May 9 23:30:05.106118 containerd[1452]: time="2025-05-09T23:30:05.106070526Z" level=info msg="connecting to shim a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a" address="unix:///run/containerd/s/460cf9a979d7811a640590eecc68aa7bd2a8e998496043b9bf31e3441a2fa935" namespace=k8s.io protocol=ttrpc version=3
May 9 23:30:05.107210 containerd[1452]: time="2025-05-09T23:30:05.106803591Z" level=info msg="connecting to shim 1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299" address="unix:///run/containerd/s/babc18b6ecc7f70aae90245f859765670d06b05992a9898231de6975d5aa5315" namespace=k8s.io protocol=ttrpc version=3
May 9 23:30:05.136066 systemd[1]: Started cri-containerd-a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a.scope - libcontainer container a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a.
May 9 23:30:05.146037 systemd[1]: Started cri-containerd-1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299.scope - libcontainer container 1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299.
May 9 23:30:05.154414 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 23:30:05.157031 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 23:30:05.181923 containerd[1452]: time="2025-05-09T23:30:05.181885909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j59vz,Uid:5a86fd07-f312-40cc-8d37-4c00db4605e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a\""
May 9 23:30:05.182730 kubelet[2576]: E0509 23:30:05.182707 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:05.185090 containerd[1452]: time="2025-05-09T23:30:05.185047616Z" level=info msg="CreateContainer within sandbox \"a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 23:30:05.185349 containerd[1452]: time="2025-05-09T23:30:05.185319065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77r98,Uid:abe0c327-168e-42c4-a828-ec895bb9f655,Namespace:kube-system,Attempt:0,} returns sandbox id \"1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299\""
May 9 23:30:05.185968 kubelet[2576]: E0509 23:30:05.185941 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:05.187891 containerd[1452]: time="2025-05-09T23:30:05.187738826Z" level=info msg="CreateContainer within sandbox \"1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 23:30:05.193182 containerd[1452]: time="2025-05-09T23:30:05.193131847Z" level=info msg="Container b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331: CDI devices from CRI Config.CDIDevices: []"
May 9 23:30:05.201619 containerd[1452]: time="2025-05-09T23:30:05.201576810Z" level=info msg="Container 44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9: CDI devices from CRI Config.CDIDevices: []"
May 9 23:30:05.206239 containerd[1452]: time="2025-05-09T23:30:05.206205405Z" level=info msg="CreateContainer within sandbox \"a8d52fd8d52c018583385df059fa8c6608479c21a8a1c3ff748ec8d1919e615a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331\""
May 9 23:30:05.207238 containerd[1452]: time="2025-05-09T23:30:05.207092155Z" level=info msg="StartContainer for \"b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331\""
May 9 23:30:05.209413 containerd[1452]: time="2025-05-09T23:30:05.209354711Z" level=info msg="connecting to shim b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331" address="unix:///run/containerd/s/460cf9a979d7811a640590eecc68aa7bd2a8e998496043b9bf31e3441a2fa935" protocol=ttrpc version=3
May 9 23:30:05.212137 containerd[1452]: time="2025-05-09T23:30:05.212101643Z" level=info msg="CreateContainer within sandbox \"1196fd2f5544ca861bf852c1bdc5ed55bc7a00d9eb293ad93fe3ba8002d91299\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9\""
May 9 23:30:05.212825 containerd[1452]: time="2025-05-09T23:30:05.212797026Z" level=info msg="StartContainer for \"44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9\""
May 9 23:30:05.217074 containerd[1452]: time="2025-05-09T23:30:05.216440789Z" level=info msg="connecting to shim 44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9" address="unix:///run/containerd/s/babc18b6ecc7f70aae90245f859765670d06b05992a9898231de6975d5aa5315" protocol=ttrpc version=3
May 9 23:30:05.235153 systemd[1]: Started cri-containerd-b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331.scope - libcontainer container b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331.
May 9 23:30:05.238400 systemd[1]: Started cri-containerd-44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9.scope - libcontainer container 44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9.
May 9 23:30:05.271116 containerd[1452]: time="2025-05-09T23:30:05.270856054Z" level=info msg="StartContainer for \"b87c875d52e6650f9a37f69630f361bcdb976c9600c0c4c6fd6c038518b3e331\" returns successfully"
May 9 23:30:05.275811 containerd[1452]: time="2025-05-09T23:30:05.275769219Z" level=info msg="StartContainer for \"44b2218aa09657fe5a12bd0bb347316a432b15d9f917dca5af4f737158dcc9c9\" returns successfully"
May 9 23:30:05.905384 kubelet[2576]: E0509 23:30:05.905319 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:05.907878 kubelet[2576]: E0509 23:30:05.907591 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:05.928396 kubelet[2576]: I0509 23:30:05.928045 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-77r98" podStartSLOduration=28.92802714 podStartE2EDuration="28.92802714s" podCreationTimestamp="2025-05-09 23:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:30:05.917026611 +0000 UTC m=+34.245554517" watchObservedRunningTime="2025-05-09 23:30:05.92802714 +0000 UTC m=+34.256555086"
May 9 23:30:05.928396 kubelet[2576]: I0509 23:30:05.928154 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j59vz" podStartSLOduration=28.928149064 podStartE2EDuration="28.928149064s" podCreationTimestamp="2025-05-09 23:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:30:05.927492562 +0000 UTC m=+34.256020508" watchObservedRunningTime="2025-05-09 23:30:05.928149064 +0000 UTC m=+34.256677050"
May 9 23:30:06.909010 kubelet[2576]: E0509 23:30:06.908979 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:06.909010 kubelet[2576]: E0509 23:30:06.909004 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:07.910455 kubelet[2576]: E0509 23:30:07.910403 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:07.911251 kubelet[2576]: E0509 23:30:07.911224 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 23:30:09.405048 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398).
May 9 23:30:09.453074 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:09.455112 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:09.461027 systemd-logind[1438]: New session 10 of user core.
May 9 23:30:09.469021 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 23:30:09.590022 sshd[3972]: Connection closed by 10.0.0.1 port 33398
May 9 23:30:09.590569 sshd-session[3970]: pam_unix(sshd:session): session closed for user core
May 9 23:30:09.593810 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit.
May 9 23:30:09.594231 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:33398.service: Deactivated successfully.
May 9 23:30:09.596109 systemd[1]: session-10.scope: Deactivated successfully.
May 9 23:30:09.596981 systemd-logind[1438]: Removed session 10.
May 9 23:30:14.608139 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:55374.service - OpenSSH per-connection server daemon (10.0.0.1:55374).
May 9 23:30:14.657184 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 55374 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:14.658335 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:14.662514 systemd-logind[1438]: New session 11 of user core.
May 9 23:30:14.672086 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 23:30:14.782991 sshd[3989]: Connection closed by 10.0.0.1 port 55374
May 9 23:30:14.783330 sshd-session[3987]: pam_unix(sshd:session): session closed for user core
May 9 23:30:14.793196 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:55374.service: Deactivated successfully.
May 9 23:30:14.795616 systemd[1]: session-11.scope: Deactivated successfully.
May 9 23:30:14.796799 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit.
May 9 23:30:14.798359 systemd-logind[1438]: Removed session 11.
May 9 23:30:14.800810 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:55378.service - OpenSSH per-connection server daemon (10.0.0.1:55378).
May 9 23:30:14.847721 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 55378 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:14.849069 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:14.853926 systemd-logind[1438]: New session 12 of user core.
May 9 23:30:14.869014 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 23:30:15.014082 sshd[4005]: Connection closed by 10.0.0.1 port 55378
May 9 23:30:15.014794 sshd-session[4003]: pam_unix(sshd:session): session closed for user core
May 9 23:30:15.026837 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:55378.service: Deactivated successfully.
May 9 23:30:15.030595 systemd[1]: session-12.scope: Deactivated successfully.
May 9 23:30:15.035012 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit.
May 9 23:30:15.038230 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:55388.service - OpenSSH per-connection server daemon (10.0.0.1:55388).
May 9 23:30:15.040270 systemd-logind[1438]: Removed session 12.
May 9 23:30:15.094682 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 55388 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:15.095931 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:15.100633 systemd-logind[1438]: New session 13 of user core.
May 9 23:30:15.110000 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 23:30:15.221681 sshd[4018]: Connection closed by 10.0.0.1 port 55388
May 9 23:30:15.222291 sshd-session[4015]: pam_unix(sshd:session): session closed for user core
May 9 23:30:15.227845 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:55388.service: Deactivated successfully.
May 9 23:30:15.230000 systemd[1]: session-13.scope: Deactivated successfully.
May 9 23:30:15.230762 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit.
May 9 23:30:15.232270 systemd-logind[1438]: Removed session 13.
May 9 23:30:20.233340 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:55392.service - OpenSSH per-connection server daemon (10.0.0.1:55392).
May 9 23:30:20.274813 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 55392 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:20.276107 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:20.279844 systemd-logind[1438]: New session 14 of user core.
May 9 23:30:20.289008 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 23:30:20.397786 sshd[4033]: Connection closed by 10.0.0.1 port 55392
May 9 23:30:20.398123 sshd-session[4031]: pam_unix(sshd:session): session closed for user core
May 9 23:30:20.401730 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:55392.service: Deactivated successfully.
May 9 23:30:20.403573 systemd[1]: session-14.scope: Deactivated successfully.
May 9 23:30:20.404203 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
May 9 23:30:20.404973 systemd-logind[1438]: Removed session 14.
May 9 23:30:25.420237 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438).
May 9 23:30:25.463087 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:25.464623 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:25.472301 systemd-logind[1438]: New session 15 of user core.
May 9 23:30:25.480593 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 23:30:25.611458 sshd[4048]: Connection closed by 10.0.0.1 port 42438
May 9 23:30:25.610528 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
May 9 23:30:25.626023 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:42438.service: Deactivated successfully.
May 9 23:30:25.627440 systemd[1]: session-15.scope: Deactivated successfully.
May 9 23:30:25.629056 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
May 9 23:30:25.630927 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440).
May 9 23:30:25.632273 systemd-logind[1438]: Removed session 15.
May 9 23:30:25.681342 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:25.682568 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:25.686798 systemd-logind[1438]: New session 16 of user core.
May 9 23:30:25.693990 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 23:30:25.937269 sshd[4064]: Connection closed by 10.0.0.1 port 42440
May 9 23:30:25.938640 sshd-session[4061]: pam_unix(sshd:session): session closed for user core
May 9 23:30:25.946008 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:42440.service: Deactivated successfully.
May 9 23:30:25.947390 systemd[1]: session-16.scope: Deactivated successfully.
May 9 23:30:25.948030 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
May 9 23:30:25.949694 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:42450.service - OpenSSH per-connection server daemon (10.0.0.1:42450).
May 9 23:30:25.953009 systemd-logind[1438]: Removed session 16.
May 9 23:30:26.008245 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 42450 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:26.009399 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:26.013477 systemd-logind[1438]: New session 17 of user core.
May 9 23:30:26.021984 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 23:30:26.744247 sshd[4078]: Connection closed by 10.0.0.1 port 42450
May 9 23:30:26.744664 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
May 9 23:30:26.756391 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:42450.service: Deactivated successfully.
May 9 23:30:26.761528 systemd[1]: session-17.scope: Deactivated successfully.
May 9 23:30:26.766395 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
May 9 23:30:26.770712 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460).
May 9 23:30:26.773288 systemd-logind[1438]: Removed session 17.
May 9 23:30:26.823247 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:26.824412 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:26.830606 systemd-logind[1438]: New session 18 of user core.
May 9 23:30:26.837011 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 23:30:27.054753 sshd[4103]: Connection closed by 10.0.0.1 port 42460
May 9 23:30:27.054465 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
May 9 23:30:27.065258 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:42460.service: Deactivated successfully.
May 9 23:30:27.068608 systemd[1]: session-18.scope: Deactivated successfully.
May 9 23:30:27.069536 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
May 9 23:30:27.071555 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:42466.service - OpenSSH per-connection server daemon (10.0.0.1:42466).
May 9 23:30:27.072267 systemd-logind[1438]: Removed session 18.
May 9 23:30:27.124659 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 42466 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:27.125957 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:27.129926 systemd-logind[1438]: New session 19 of user core.
May 9 23:30:27.143095 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 23:30:27.248841 sshd[4117]: Connection closed by 10.0.0.1 port 42466
May 9 23:30:27.249350 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
May 9 23:30:27.252667 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:42466.service: Deactivated successfully.
May 9 23:30:27.254742 systemd[1]: session-19.scope: Deactivated successfully.
May 9 23:30:27.256400 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
May 9 23:30:27.257204 systemd-logind[1438]: Removed session 19.
May 9 23:30:32.261367 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:42470.service - OpenSSH per-connection server daemon (10.0.0.1:42470).
May 9 23:30:32.313208 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 42470 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:32.314435 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:32.318637 systemd-logind[1438]: New session 20 of user core.
May 9 23:30:32.326065 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 23:30:32.432275 sshd[4138]: Connection closed by 10.0.0.1 port 42470
May 9 23:30:32.432631 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
May 9 23:30:32.436253 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:42470.service: Deactivated successfully.
May 9 23:30:32.438036 systemd[1]: session-20.scope: Deactivated successfully.
May 9 23:30:32.440384 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
May 9 23:30:32.441221 systemd-logind[1438]: Removed session 20.
May 9 23:30:37.445233 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104).
May 9 23:30:37.495622 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:37.496712 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:37.500327 systemd-logind[1438]: New session 21 of user core.
May 9 23:30:37.508038 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 23:30:37.615904 sshd[4154]: Connection closed by 10.0.0.1 port 48104
May 9 23:30:37.615607 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
May 9 23:30:37.618974 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
May 9 23:30:37.619572 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:48104.service: Deactivated successfully.
May 9 23:30:37.621930 systemd[1]: session-21.scope: Deactivated successfully.
May 9 23:30:37.624853 systemd-logind[1438]: Removed session 21.
May 9 23:30:42.627306 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:51952.service - OpenSSH per-connection server daemon (10.0.0.1:51952).
May 9 23:30:42.681291 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 51952 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:42.682404 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:42.686509 systemd-logind[1438]: New session 22 of user core.
May 9 23:30:42.693037 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 23:30:42.799896 sshd[4171]: Connection closed by 10.0.0.1 port 51952
May 9 23:30:42.800147 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
May 9 23:30:42.803226 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:51952.service: Deactivated successfully.
May 9 23:30:42.805592 systemd[1]: session-22.scope: Deactivated successfully.
May 9 23:30:42.808316 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
May 9 23:30:42.810370 systemd-logind[1438]: Removed session 22.
May 9 23:30:47.815356 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968).
May 9 23:30:47.860935 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:47.862086 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:47.866223 systemd-logind[1438]: New session 23 of user core.
May 9 23:30:47.874078 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 23:30:47.980893 sshd[4186]: Connection closed by 10.0.0.1 port 51968
May 9 23:30:47.981248 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
May 9 23:30:47.998952 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:51968.service: Deactivated successfully.
May 9 23:30:48.001706 systemd[1]: session-23.scope: Deactivated successfully.
May 9 23:30:48.003277 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit.
May 9 23:30:48.005153 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:51970.service - OpenSSH per-connection server daemon (10.0.0.1:51970).
May 9 23:30:48.006279 systemd-logind[1438]: Removed session 23.
May 9 23:30:48.045462 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 51970 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0
May 9 23:30:48.046878 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:30:48.050814 systemd-logind[1438]: New session 24 of user core.
May 9 23:30:48.067126 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 23:30:50.583208 containerd[1452]: time="2025-05-09T23:30:50.583163767Z" level=info msg="StopContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" with timeout 30 (s)"
May 9 23:30:50.594710 containerd[1452]: time="2025-05-09T23:30:50.594079652Z" level=info msg="Stop container \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" with signal terminated"
May 9 23:30:50.604623 systemd[1]: cri-containerd-f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61.scope: Deactivated successfully.
May 9 23:30:50.606477 systemd[1]: cri-containerd-f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61.scope: Consumed 402ms CPU time, 26.1M memory peak, 1.1M read from disk, 4K written to disk.
May 9 23:30:50.608168 containerd[1452]: time="2025-05-09T23:30:50.606550275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" id:\"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" pid:2983 exited_at:{seconds:1746833450 nanos:606194551}"
May 9 23:30:50.614158 containerd[1452]: time="2025-05-09T23:30:50.614064161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" id:\"35fcefdc38ccb6b5e2f706110b92edc9260be5cc09d74e94fc98ed3a8a19146f\" pid:4224 exited_at:{seconds:1746833450 nanos:613672556}"
May 9 23:30:50.616750 containerd[1452]: time="2025-05-09T23:30:50.616607910Z" level=info msg="StopContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" with timeout 2 (s)"
May 9 23:30:50.617012 containerd[1452]: time="2025-05-09T23:30:50.616989754Z" level=info msg="Stop container \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" with signal terminated"
May 9 23:30:50.626564 systemd-networkd[1391]: lxc_health: Link DOWN
May 9 23:30:50.626579 systemd-networkd[1391]: lxc_health: Lost carrier
May 9 23:30:50.630503 containerd[1452]: time="2025-05-09T23:30:50.630399188Z" level=info msg="received exit event container_id:\"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" id:\"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" pid:2983 exited_at:{seconds:1746833450 nanos:606194551}"
May 9 23:30:50.642650 containerd[1452]: time="2025-05-09T23:30:50.641877239Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 23:30:50.642076 systemd[1]: cri-containerd-cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9.scope: Deactivated successfully.
May 9 23:30:50.642376 systemd[1]: cri-containerd-cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9.scope: Consumed 6.380s CPU time, 125.4M memory peak, 188K read from disk, 12.9M written to disk.
May 9 23:30:50.643368 containerd[1452]: time="2025-05-09T23:30:50.643044572Z" level=info msg="received exit event container_id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" pid:3236 exited_at:{seconds:1746833450 nanos:642283204}"
May 9 23:30:50.643368 containerd[1452]: time="2025-05-09T23:30:50.643123933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" id:\"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" pid:3236 exited_at:{seconds:1746833450 nanos:642283204}"
May 9 23:30:50.653430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61-rootfs.mount: Deactivated successfully.
May 9 23:30:50.669651 containerd[1452]: time="2025-05-09T23:30:50.669509515Z" level=info msg="StopContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" returns successfully"
May 9 23:30:50.670925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9-rootfs.mount: Deactivated successfully.
May 9 23:30:50.673891 containerd[1452]: time="2025-05-09T23:30:50.673848805Z" level=info msg="StopPodSandbox for \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\""
May 9 23:30:50.673996 containerd[1452]: time="2025-05-09T23:30:50.673978287Z" level=info msg="Container to stop \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.675602 containerd[1452]: time="2025-05-09T23:30:50.675554305Z" level=info msg="StopContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" returns successfully"
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.675994470Z" level=info msg="StopPodSandbox for \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\""
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.676060350Z" level=info msg="Container to stop \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.676073791Z" level=info msg="Container to stop \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.676082311Z" level=info msg="Container to stop \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.676090791Z" level=info msg="Container to stop \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.676235 containerd[1452]: time="2025-05-09T23:30:50.676098671Z" level=info msg="Container to stop \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 9 23:30:50.681161 systemd[1]: cri-containerd-a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee.scope: Deactivated successfully.
May 9 23:30:50.694563 containerd[1452]: time="2025-05-09T23:30:50.694527442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" id:\"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" pid:2694 exit_status:137 exited_at:{seconds:1746833450 nanos:694002836}"
May 9 23:30:50.698302 systemd[1]: cri-containerd-9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8.scope: Deactivated successfully.
May 9 23:30:50.724600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8-rootfs.mount: Deactivated successfully.
May 9 23:30:50.725215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee-rootfs.mount: Deactivated successfully.
May 9 23:30:50.729065 containerd[1452]: time="2025-05-09T23:30:50.728354989Z" level=info msg="shim disconnected" id=9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8 namespace=k8s.io May 9 23:30:50.729065 containerd[1452]: time="2025-05-09T23:30:50.728428790Z" level=warning msg="cleaning up after shim disconnected" id=9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8 namespace=k8s.io May 9 23:30:50.729065 containerd[1452]: time="2025-05-09T23:30:50.728486271Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:30:50.730481 containerd[1452]: time="2025-05-09T23:30:50.730402332Z" level=info msg="shim disconnected" id=a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee namespace=k8s.io May 9 23:30:50.730481 containerd[1452]: time="2025-05-09T23:30:50.730439773Z" level=warning msg="cleaning up after shim disconnected" id=a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee namespace=k8s.io May 9 23:30:50.730481 containerd[1452]: time="2025-05-09T23:30:50.730464733Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:30:50.731123 containerd[1452]: time="2025-05-09T23:30:50.728502671Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/b30b98afef35d20714039db0f06ab2d7043e2af356011f5760debecb28e1a37b->@: write: broken pipe" runtime=io.containerd.runc.v2 May 9 23:30:50.747904 containerd[1452]: time="2025-05-09T23:30:50.747151084Z" level=info msg="received exit event sandbox_id:\"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" exit_status:137 exited_at:{seconds:1746833450 nanos:698691089}" May 9 23:30:50.748572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8-shm.mount: Deactivated successfully. 
May 9 23:30:50.748896 containerd[1452]: time="2025-05-09T23:30:50.748815183Z" level=info msg="TearDown network for sandbox \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" successfully" May 9 23:30:50.748896 containerd[1452]: time="2025-05-09T23:30:50.748844104Z" level=info msg="StopPodSandbox for \"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" returns successfully" May 9 23:30:50.752050 containerd[1452]: time="2025-05-09T23:30:50.752005500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" id:\"9f52b5a12660bfde495cfd06d24490512f4333f864bf930142291ddbe43e57f8\" pid:2775 exit_status:137 exited_at:{seconds:1746833450 nanos:698691089}" May 9 23:30:50.752544 containerd[1452]: time="2025-05-09T23:30:50.752513386Z" level=info msg="TearDown network for sandbox \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" successfully" May 9 23:30:50.752544 containerd[1452]: time="2025-05-09T23:30:50.752542466Z" level=info msg="StopPodSandbox for \"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" returns successfully" May 9 23:30:50.758873 containerd[1452]: time="2025-05-09T23:30:50.758832098Z" level=info msg="received exit event sandbox_id:\"a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee\" exit_status:137 exited_at:{seconds:1746833450 nanos:694002836}" May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849706 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-etc-cni-netd\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849817 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-cgroup\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849839 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cni-path\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849878 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-hostproc\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849903 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-config-path\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.850578 kubelet[2576]: I0509 23:30:50.849927 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6c14fb3-aa11-45a3-8840-665ba358b454-clustermesh-secrets\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.849945 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szq9r\" (UniqueName: \"kubernetes.io/projected/e5235020-a1de-4e25-93f9-9cab6d569c73-kube-api-access-szq9r\") pod \"e5235020-a1de-4e25-93f9-9cab6d569c73\" (UID: \"e5235020-a1de-4e25-93f9-9cab6d569c73\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.849960 2576 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-run\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.849976 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hhrb\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.849991 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-bpf-maps\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.850007 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-hubble-tls\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851052 kubelet[2576]: I0509 23:30:50.850021 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-xtables-lock\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851180 kubelet[2576]: I0509 23:30:50.850038 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-kernel\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: 
\"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851180 kubelet[2576]: I0509 23:30:50.850052 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-net\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851180 kubelet[2576]: I0509 23:30:50.850066 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-lib-modules\") pod \"b6c14fb3-aa11-45a3-8840-665ba358b454\" (UID: \"b6c14fb3-aa11-45a3-8840-665ba358b454\") " May 9 23:30:50.851180 kubelet[2576]: I0509 23:30:50.850083 2576 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5235020-a1de-4e25-93f9-9cab6d569c73-cilium-config-path\") pod \"e5235020-a1de-4e25-93f9-9cab6d569c73\" (UID: \"e5235020-a1de-4e25-93f9-9cab6d569c73\") " May 9 23:30:50.853651 kubelet[2576]: I0509 23:30:50.852985 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.853651 kubelet[2576]: I0509 23:30:50.853022 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855074 kubelet[2576]: I0509 23:30:50.855031 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5235020-a1de-4e25-93f9-9cab6d569c73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5235020-a1de-4e25-93f9-9cab6d569c73" (UID: "e5235020-a1de-4e25-93f9-9cab6d569c73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 23:30:50.855116 kubelet[2576]: I0509 23:30:50.855093 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855366 kubelet[2576]: I0509 23:30:50.855323 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 23:30:50.855410 kubelet[2576]: I0509 23:30:50.855373 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855410 kubelet[2576]: I0509 23:30:50.855391 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855460 kubelet[2576]: I0509 23:30:50.855411 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855460 kubelet[2576]: I0509 23:30:50.855431 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855460 kubelet[2576]: I0509 23:30:50.855446 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855634 kubelet[2576]: I0509 23:30:50.855576 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.855634 kubelet[2576]: I0509 23:30:50.855612 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 23:30:50.856857 kubelet[2576]: I0509 23:30:50.856825 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5235020-a1de-4e25-93f9-9cab6d569c73-kube-api-access-szq9r" (OuterVolumeSpecName: "kube-api-access-szq9r") pod "e5235020-a1de-4e25-93f9-9cab6d569c73" (UID: "e5235020-a1de-4e25-93f9-9cab6d569c73"). InnerVolumeSpecName "kube-api-access-szq9r". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 23:30:50.857650 kubelet[2576]: I0509 23:30:50.857615 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb" (OuterVolumeSpecName: "kube-api-access-5hhrb") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "kube-api-access-5hhrb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 23:30:50.857866 kubelet[2576]: I0509 23:30:50.857807 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 23:30:50.858635 kubelet[2576]: I0509 23:30:50.858597 2576 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c14fb3-aa11-45a3-8840-665ba358b454-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6c14fb3-aa11-45a3-8840-665ba358b454" (UID: "b6c14fb3-aa11-45a3-8840-665ba358b454"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 9 23:30:50.950954 kubelet[2576]: I0509 23:30:50.950910 2576 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-hostproc\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.950954 kubelet[2576]: I0509 23:30:50.950946 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.950954 kubelet[2576]: I0509 23:30:50.950957 2576 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6c14fb3-aa11-45a3-8840-665ba358b454-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.950954 kubelet[2576]: I0509 23:30:50.950965 2576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szq9r\" (UniqueName: \"kubernetes.io/projected/e5235020-a1de-4e25-93f9-9cab6d569c73-kube-api-access-szq9r\") on node 
\"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.950977 2576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5hhrb\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-kube-api-access-5hhrb\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.950985 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-run\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.950992 2576 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.951000 2576 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.951008 2576 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6c14fb3-aa11-45a3-8840-665ba358b454-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.951015 2576 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.951022 2576 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-lib-modules\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951141 kubelet[2576]: I0509 23:30:50.951032 2576 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5235020-a1de-4e25-93f9-9cab6d569c73-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951305 kubelet[2576]: I0509 23:30:50.951039 2576 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951305 kubelet[2576]: I0509 23:30:50.951047 2576 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951305 kubelet[2576]: I0509 23:30:50.951055 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 9 23:30:50.951305 kubelet[2576]: I0509 23:30:50.951062 2576 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6c14fb3-aa11-45a3-8840-665ba358b454-cni-path\") on node \"localhost\" DevicePath \"\"" May 9 23:30:51.005470 kubelet[2576]: I0509 23:30:51.005449 2576 scope.go:117] "RemoveContainer" containerID="cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9" May 9 23:30:51.011343 containerd[1452]: time="2025-05-09T23:30:51.010047495Z" level=info msg="RemoveContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\"" May 9 23:30:51.013382 systemd[1]: Removed slice kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice - libcontainer container kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice. 
May 9 23:30:51.013543 systemd[1]: kubepods-burstable-podb6c14fb3_aa11_45a3_8840_665ba358b454.slice: Consumed 6.563s CPU time, 125.7M memory peak, 248K read from disk, 16.1M written to disk. May 9 23:30:51.017447 systemd[1]: Removed slice kubepods-besteffort-pode5235020_a1de_4e25_93f9_9cab6d569c73.slice - libcontainer container kubepods-besteffort-pode5235020_a1de_4e25_93f9_9cab6d569c73.slice. May 9 23:30:51.018036 containerd[1452]: time="2025-05-09T23:30:51.017796384Z" level=info msg="RemoveContainer for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" returns successfully" May 9 23:30:51.017540 systemd[1]: kubepods-besteffort-pode5235020_a1de_4e25_93f9_9cab6d569c73.slice: Consumed 418ms CPU time, 26.4M memory peak, 1.1M read from disk, 4K written to disk. May 9 23:30:51.018194 kubelet[2576]: I0509 23:30:51.018168 2576 scope.go:117] "RemoveContainer" containerID="491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d" May 9 23:30:51.020167 containerd[1452]: time="2025-05-09T23:30:51.020141851Z" level=info msg="RemoveContainer for \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\"" May 9 23:30:51.023635 containerd[1452]: time="2025-05-09T23:30:51.023550851Z" level=info msg="RemoveContainer for \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" returns successfully" May 9 23:30:51.023723 kubelet[2576]: I0509 23:30:51.023691 2576 scope.go:117] "RemoveContainer" containerID="62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139" May 9 23:30:51.026101 containerd[1452]: time="2025-05-09T23:30:51.025951839Z" level=info msg="RemoveContainer for \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\"" May 9 23:30:51.030159 containerd[1452]: time="2025-05-09T23:30:51.030124967Z" level=info msg="RemoveContainer for \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" returns successfully" May 9 23:30:51.030736 kubelet[2576]: I0509 23:30:51.030703 2576 scope.go:117] 
"RemoveContainer" containerID="75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32" May 9 23:30:51.032585 containerd[1452]: time="2025-05-09T23:30:51.032558035Z" level=info msg="RemoveContainer for \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\"" May 9 23:30:51.038209 containerd[1452]: time="2025-05-09T23:30:51.038169780Z" level=info msg="RemoveContainer for \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" returns successfully" May 9 23:30:51.038380 kubelet[2576]: I0509 23:30:51.038360 2576 scope.go:117] "RemoveContainer" containerID="46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d" May 9 23:30:51.040343 containerd[1452]: time="2025-05-09T23:30:51.039897000Z" level=info msg="RemoveContainer for \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\"" May 9 23:30:51.043000 containerd[1452]: time="2025-05-09T23:30:51.042963196Z" level=info msg="RemoveContainer for \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" returns successfully" May 9 23:30:51.043171 kubelet[2576]: I0509 23:30:51.043144 2576 scope.go:117] "RemoveContainer" containerID="cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9" May 9 23:30:51.044416 containerd[1452]: time="2025-05-09T23:30:51.043346120Z" level=error msg="ContainerStatus for \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\": not found" May 9 23:30:51.049351 kubelet[2576]: E0509 23:30:51.049315 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\": not found" containerID="cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9" May 9 23:30:51.049425 kubelet[2576]: I0509 
23:30:51.049355 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9"} err="failed to get container status \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf2538005147bffc150d0fef500d8dbe5275f6c0ad4f6287f8e2f54369341cc9\": not found" May 9 23:30:51.049449 kubelet[2576]: I0509 23:30:51.049426 2576 scope.go:117] "RemoveContainer" containerID="491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d" May 9 23:30:51.049668 containerd[1452]: time="2025-05-09T23:30:51.049616433Z" level=error msg="ContainerStatus for \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\": not found" May 9 23:30:51.049825 kubelet[2576]: E0509 23:30:51.049800 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\": not found" containerID="491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d" May 9 23:30:51.049877 kubelet[2576]: I0509 23:30:51.049826 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d"} err="failed to get container status \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\": rpc error: code = NotFound desc = an error occurred when try to find container \"491e4dc890449b010112ac0ef3e5d0e09552bd09c0765d77a64977bb226aa36d\": not found" May 9 23:30:51.049877 kubelet[2576]: I0509 23:30:51.049843 2576 scope.go:117] "RemoveContainer" 
containerID="62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139" May 9 23:30:51.050318 containerd[1452]: time="2025-05-09T23:30:51.050288440Z" level=error msg="ContainerStatus for \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\": not found" May 9 23:30:51.050528 kubelet[2576]: E0509 23:30:51.050502 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\": not found" containerID="62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139" May 9 23:30:51.050564 kubelet[2576]: I0509 23:30:51.050529 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139"} err="failed to get container status \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\": rpc error: code = NotFound desc = an error occurred when try to find container \"62e447bf5dc7a07dd25ad973f7ad4b2ee851679b0380eb17beb984ae69068139\": not found" May 9 23:30:51.050564 kubelet[2576]: I0509 23:30:51.050545 2576 scope.go:117] "RemoveContainer" containerID="75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32" May 9 23:30:51.050814 containerd[1452]: time="2025-05-09T23:30:51.050780566Z" level=error msg="ContainerStatus for \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\": not found" May 9 23:30:51.050967 kubelet[2576]: E0509 23:30:51.050948 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\": not found" containerID="75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32" May 9 23:30:51.051007 kubelet[2576]: I0509 23:30:51.050971 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32"} err="failed to get container status \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\": rpc error: code = NotFound desc = an error occurred when try to find container \"75a6efcede63b8fb9cc2393f802d8292ea41683f123b448e047b35c0819c4c32\": not found" May 9 23:30:51.051007 kubelet[2576]: I0509 23:30:51.050986 2576 scope.go:117] "RemoveContainer" containerID="46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d" May 9 23:30:51.051219 containerd[1452]: time="2025-05-09T23:30:51.051187771Z" level=error msg="ContainerStatus for \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\": not found" May 9 23:30:51.051351 kubelet[2576]: E0509 23:30:51.051330 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\": not found" containerID="46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d" May 9 23:30:51.051381 kubelet[2576]: I0509 23:30:51.051358 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d"} err="failed to get container status \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"46210b74797debbe7ca0a06cd94d9a33db2e2d1be230c5cc9737ba9cf4b6685d\": not found" May 9 23:30:51.051381 kubelet[2576]: I0509 23:30:51.051375 2576 scope.go:117] "RemoveContainer" containerID="f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61" May 9 23:30:51.053042 containerd[1452]: time="2025-05-09T23:30:51.053000192Z" level=info msg="RemoveContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\"" May 9 23:30:51.055557 containerd[1452]: time="2025-05-09T23:30:51.055522461Z" level=info msg="RemoveContainer for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" returns successfully" May 9 23:30:51.055719 kubelet[2576]: I0509 23:30:51.055691 2576 scope.go:117] "RemoveContainer" containerID="f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61" May 9 23:30:51.055957 containerd[1452]: time="2025-05-09T23:30:51.055914385Z" level=error msg="ContainerStatus for \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\": not found" May 9 23:30:51.056073 kubelet[2576]: E0509 23:30:51.056041 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\": not found" containerID="f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61" May 9 23:30:51.056100 kubelet[2576]: I0509 23:30:51.056071 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61"} err="failed to get container status \"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f8cf563d5ba7740d7130531996d4975820079bdb9910e643f66333541ae98b61\": not found" May 9 23:30:51.653419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3015b2530fb738773c331267eb03fe55de670e3e951f72a5c905a7e08d847ee-shm.mount: Deactivated successfully. May 9 23:30:51.653521 systemd[1]: var-lib-kubelet-pods-b6c14fb3\x2daa11\x2d45a3\x2d8840\x2d665ba358b454-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hhrb.mount: Deactivated successfully. May 9 23:30:51.653575 systemd[1]: var-lib-kubelet-pods-e5235020\x2da1de\x2d4e25\x2d93f9\x2d9cab6d569c73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszq9r.mount: Deactivated successfully. May 9 23:30:51.653630 systemd[1]: var-lib-kubelet-pods-b6c14fb3\x2daa11\x2d45a3\x2d8840\x2d665ba358b454-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 23:30:51.653683 systemd[1]: var-lib-kubelet-pods-b6c14fb3\x2daa11\x2d45a3\x2d8840\x2d665ba358b454-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 9 23:30:51.780935 kubelet[2576]: I0509 23:30:51.780898 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c14fb3-aa11-45a3-8840-665ba358b454" path="/var/lib/kubelet/pods/b6c14fb3-aa11-45a3-8840-665ba358b454/volumes" May 9 23:30:51.781478 kubelet[2576]: I0509 23:30:51.781445 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5235020-a1de-4e25-93f9-9cab6d569c73" path="/var/lib/kubelet/pods/e5235020-a1de-4e25-93f9-9cab6d569c73/volumes" May 9 23:30:51.835068 kubelet[2576]: E0509 23:30:51.835019 2576 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:30:52.537311 sshd[4202]: Connection closed by 10.0.0.1 port 51970 May 9 23:30:52.539037 containerd[1452]: time="2025-05-09T23:30:52.538962288Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1746833450 nanos:694002836}" May 9 23:30:52.539455 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 9 23:30:52.552779 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:51970.service: Deactivated successfully. May 9 23:30:52.554511 systemd[1]: session-24.scope: Deactivated successfully. May 9 23:30:52.554709 systemd[1]: session-24.scope: Consumed 1.829s CPU time, 28.7M memory peak. May 9 23:30:52.555235 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. May 9 23:30:52.557176 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:36036.service - OpenSSH per-connection server daemon (10.0.0.1:36036). May 9 23:30:52.558262 systemd-logind[1438]: Removed session 24. 
May 9 23:30:52.612050 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 36036 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0 May 9 23:30:52.614192 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:30:52.619120 systemd-logind[1438]: New session 25 of user core. May 9 23:30:52.629027 systemd[1]: Started session-25.scope - Session 25 of User core. May 9 23:30:53.495835 sshd[4352]: Connection closed by 10.0.0.1 port 36036 May 9 23:30:53.495815 sshd-session[4349]: pam_unix(sshd:session): session closed for user core May 9 23:30:53.503299 kubelet[2576]: I0509 23:30:53.502759 2576 memory_manager.go:355] "RemoveStaleState removing state" podUID="e5235020-a1de-4e25-93f9-9cab6d569c73" containerName="cilium-operator" May 9 23:30:53.503299 kubelet[2576]: I0509 23:30:53.502786 2576 memory_manager.go:355] "RemoveStaleState removing state" podUID="b6c14fb3-aa11-45a3-8840-665ba358b454" containerName="cilium-agent" May 9 23:30:53.511150 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:36036.service: Deactivated successfully. May 9 23:30:53.515334 systemd[1]: session-25.scope: Deactivated successfully. May 9 23:30:53.516766 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. May 9 23:30:53.533377 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:36046.service - OpenSSH per-connection server daemon (10.0.0.1:36046). May 9 23:30:53.534544 systemd-logind[1438]: Removed session 25. May 9 23:30:53.555900 systemd[1]: Created slice kubepods-burstable-podbb16cc1b_7817_4ace_a07b_8a26a36a92ce.slice - libcontainer container kubepods-burstable-podbb16cc1b_7817_4ace_a07b_8a26a36a92ce.slice. 
May 9 23:30:53.561828 kubelet[2576]: I0509 23:30:53.561796 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-cilium-config-path\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.561944 kubelet[2576]: I0509 23:30:53.561835 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-host-proc-sys-net\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.561944 kubelet[2576]: I0509 23:30:53.561855 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-clustermesh-secrets\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.561944 kubelet[2576]: I0509 23:30:53.561928 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fts77\" (UniqueName: \"kubernetes.io/projected/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-kube-api-access-fts77\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.561952 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-bpf-maps\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.561972 2576 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-xtables-lock\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.562002 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-cilium-ipsec-secrets\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.562017 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-cilium-run\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.562031 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-etc-cni-netd\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562041 kubelet[2576]: I0509 23:30:53.562067 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-lib-modules\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562346 kubelet[2576]: I0509 23:30:53.562084 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-host-proc-sys-kernel\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562346 kubelet[2576]: I0509 23:30:53.562099 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-hubble-tls\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562346 kubelet[2576]: I0509 23:30:53.562141 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-cilium-cgroup\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562346 kubelet[2576]: I0509 23:30:53.562163 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-hostproc\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.562346 kubelet[2576]: I0509 23:30:53.562178 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb16cc1b-7817-4ace-a07b-8a26a36a92ce-cni-path\") pod \"cilium-j7rbd\" (UID: \"bb16cc1b-7817-4ace-a07b-8a26a36a92ce\") " pod="kube-system/cilium-j7rbd" May 9 23:30:53.592301 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 36046 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0 May 9 23:30:53.593605 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:30:53.597449 systemd-logind[1438]: New session 26 of user core. 
May 9 23:30:53.610030 systemd[1]: Started session-26.scope - Session 26 of User core. May 9 23:30:53.659162 sshd[4366]: Connection closed by 10.0.0.1 port 36046 May 9 23:30:53.659675 sshd-session[4363]: pam_unix(sshd:session): session closed for user core May 9 23:30:53.681726 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:36046.service: Deactivated successfully. May 9 23:30:53.683256 systemd[1]: session-26.scope: Deactivated successfully. May 9 23:30:53.684634 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. May 9 23:30:53.686061 systemd[1]: Started sshd@26-10.0.0.70:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058). May 9 23:30:53.686938 systemd-logind[1438]: Removed session 26. May 9 23:30:53.740501 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:u4rrAZSuMgvZd1QBrTTW6Lv6fNorFPuJwLKuqcYrnG0 May 9 23:30:53.741900 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:30:53.746260 systemd-logind[1438]: New session 27 of user core. May 9 23:30:53.753076 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 9 23:30:53.774060 kubelet[2576]: I0509 23:30:53.774002 2576 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T23:30:53Z","lastTransitionTime":"2025-05-09T23:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 23:30:53.858637 kubelet[2576]: E0509 23:30:53.858310 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:53.859001 containerd[1452]: time="2025-05-09T23:30:53.858961778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7rbd,Uid:bb16cc1b-7817-4ace-a07b-8a26a36a92ce,Namespace:kube-system,Attempt:0,}" May 9 23:30:53.873358 containerd[1452]: time="2025-05-09T23:30:53.873311667Z" level=info msg="connecting to shim 6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" namespace=k8s.io protocol=ttrpc version=3 May 9 23:30:53.896037 systemd[1]: Started cri-containerd-6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf.scope - libcontainer container 6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf. 
May 9 23:30:53.920051 containerd[1452]: time="2025-05-09T23:30:53.920010978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7rbd,Uid:bb16cc1b-7817-4ace-a07b-8a26a36a92ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\"" May 9 23:30:53.920703 kubelet[2576]: E0509 23:30:53.920680 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:53.922274 containerd[1452]: time="2025-05-09T23:30:53.922234684Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:30:53.928599 containerd[1452]: time="2025-05-09T23:30:53.928561119Z" level=info msg="Container 334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8: CDI devices from CRI Config.CDIDevices: []" May 9 23:30:53.939271 containerd[1452]: time="2025-05-09T23:30:53.939227765Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\"" May 9 23:30:53.940161 containerd[1452]: time="2025-05-09T23:30:53.940127016Z" level=info msg="StartContainer for \"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\"" May 9 23:30:53.941410 containerd[1452]: time="2025-05-09T23:30:53.941376630Z" level=info msg="connecting to shim 334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" protocol=ttrpc version=3 May 9 23:30:53.964096 systemd[1]: Started cri-containerd-334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8.scope - libcontainer container 
334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8. May 9 23:30:53.991612 containerd[1452]: time="2025-05-09T23:30:53.991563543Z" level=info msg="StartContainer for \"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\" returns successfully" May 9 23:30:54.003796 systemd[1]: cri-containerd-334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8.scope: Deactivated successfully. May 9 23:30:54.005614 containerd[1452]: time="2025-05-09T23:30:54.005573749Z" level=info msg="received exit event container_id:\"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\" id:\"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\" pid:4444 exited_at:{seconds:1746833454 nanos:5262225}" May 9 23:30:54.006940 containerd[1452]: time="2025-05-09T23:30:54.006897125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\" id:\"334b5542fc87466febb8ce4a0202e0f98afc89282f6785ee4d3b22289f77d5b8\" pid:4444 exited_at:{seconds:1746833454 nanos:5262225}" May 9 23:30:54.019724 kubelet[2576]: E0509 23:30:54.019692 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:54.674819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155378115.mount: Deactivated successfully. 
May 9 23:30:55.022855 kubelet[2576]: E0509 23:30:55.022826 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:55.027425 containerd[1452]: time="2025-05-09T23:30:55.026244315Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:30:55.036552 containerd[1452]: time="2025-05-09T23:30:55.035731469Z" level=info msg="Container bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e: CDI devices from CRI Config.CDIDevices: []" May 9 23:30:55.043302 containerd[1452]: time="2025-05-09T23:30:55.043262560Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\"" May 9 23:30:55.043920 containerd[1452]: time="2025-05-09T23:30:55.043749326Z" level=info msg="StartContainer for \"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\"" May 9 23:30:55.046793 containerd[1452]: time="2025-05-09T23:30:55.046748682Z" level=info msg="connecting to shim bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" protocol=ttrpc version=3 May 9 23:30:55.076044 systemd[1]: Started cri-containerd-bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e.scope - libcontainer container bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e. 
May 9 23:30:55.099452 containerd[1452]: time="2025-05-09T23:30:55.099405755Z" level=info msg="StartContainer for \"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\" returns successfully" May 9 23:30:55.109979 systemd[1]: cri-containerd-bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e.scope: Deactivated successfully. May 9 23:30:55.110463 containerd[1452]: time="2025-05-09T23:30:55.110311486Z" level=info msg="received exit event container_id:\"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\" id:\"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\" pid:4490 exited_at:{seconds:1746833455 nanos:110075883}" May 9 23:30:55.110463 containerd[1452]: time="2025-05-09T23:30:55.110356007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\" id:\"bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e\" pid:4490 exited_at:{seconds:1746833455 nanos:110075883}" May 9 23:30:55.127382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf1edf4159e20ae193940a6f1c519b773ae78224f8e22611c308d4cfdd1a883e-rootfs.mount: Deactivated successfully. 
May 9 23:30:56.026721 kubelet[2576]: E0509 23:30:56.026398 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:56.031827 containerd[1452]: time="2025-05-09T23:30:56.029656825Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:30:56.073729 containerd[1452]: time="2025-05-09T23:30:56.073679639Z" level=info msg="Container 654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69: CDI devices from CRI Config.CDIDevices: []" May 9 23:30:56.078221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444522602.mount: Deactivated successfully. May 9 23:30:56.083130 containerd[1452]: time="2025-05-09T23:30:56.083087713Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\"" May 9 23:30:56.083746 containerd[1452]: time="2025-05-09T23:30:56.083533999Z" level=info msg="StartContainer for \"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\"" May 9 23:30:56.088780 containerd[1452]: time="2025-05-09T23:30:56.088733822Z" level=info msg="connecting to shim 654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" protocol=ttrpc version=3 May 9 23:30:56.109024 systemd[1]: Started cri-containerd-654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69.scope - libcontainer container 654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69. 
May 9 23:30:56.142440 systemd[1]: cri-containerd-654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69.scope: Deactivated successfully. May 9 23:30:56.143425 containerd[1452]: time="2025-05-09T23:30:56.143388965Z" level=info msg="received exit event container_id:\"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\" id:\"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\" pid:4534 exited_at:{seconds:1746833456 nanos:143020760}" May 9 23:30:56.143809 containerd[1452]: time="2025-05-09T23:30:56.143784170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\" id:\"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\" pid:4534 exited_at:{seconds:1746833456 nanos:143020760}" May 9 23:30:56.150229 containerd[1452]: time="2025-05-09T23:30:56.150192407Z" level=info msg="StartContainer for \"654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69\" returns successfully" May 9 23:30:56.163522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-654bda7d2a339f36bd170064e15ad27e7e44096bc738db1dcb8c9302a6717f69-rootfs.mount: Deactivated successfully. 
May 9 23:30:56.836451 kubelet[2576]: E0509 23:30:56.836415 2576 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 23:30:57.035677 kubelet[2576]: E0509 23:30:57.035592 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:57.038298 containerd[1452]: time="2025-05-09T23:30:57.037966181Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:30:57.048588 containerd[1452]: time="2025-05-09T23:30:57.047972743Z" level=info msg="Container 26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61: CDI devices from CRI Config.CDIDevices: []" May 9 23:30:57.054342 containerd[1452]: time="2025-05-09T23:30:57.054289260Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\"" May 9 23:30:57.055041 containerd[1452]: time="2025-05-09T23:30:57.055008789Z" level=info msg="StartContainer for \"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\"" May 9 23:30:57.055825 containerd[1452]: time="2025-05-09T23:30:57.055789279Z" level=info msg="connecting to shim 26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" protocol=ttrpc version=3 May 9 23:30:57.084068 systemd[1]: Started cri-containerd-26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61.scope - libcontainer container 
26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61. May 9 23:30:57.105260 systemd[1]: cri-containerd-26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61.scope: Deactivated successfully. May 9 23:30:57.106318 containerd[1452]: time="2025-05-09T23:30:57.106271696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\" id:\"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\" pid:4571 exited_at:{seconds:1746833457 nanos:105560808}" May 9 23:30:57.106949 containerd[1452]: time="2025-05-09T23:30:57.106796063Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb16cc1b_7817_4ace_a07b_8a26a36a92ce.slice/cri-containerd-26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61.scope/memory.events\": no such file or directory" May 9 23:30:57.107958 containerd[1452]: time="2025-05-09T23:30:57.107921276Z" level=info msg="received exit event container_id:\"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\" id:\"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\" pid:4571 exited_at:{seconds:1746833457 nanos:105560808}" May 9 23:30:57.115538 containerd[1452]: time="2025-05-09T23:30:57.115489769Z" level=info msg="StartContainer for \"26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61\" returns successfully" May 9 23:30:57.127334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26d1940334241b3d8813a2c9bb45bd26a53cfbde2801a1453b67299bbaaa2c61-rootfs.mount: Deactivated successfully. 
May 9 23:30:58.036874 kubelet[2576]: E0509 23:30:58.036817 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:58.039452 containerd[1452]: time="2025-05-09T23:30:58.038858268Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:30:58.047390 containerd[1452]: time="2025-05-09T23:30:58.047349773Z" level=info msg="Container 09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141: CDI devices from CRI Config.CDIDevices: []" May 9 23:30:58.054072 containerd[1452]: time="2025-05-09T23:30:58.054025655Z" level=info msg="CreateContainer within sandbox \"6dc991d82a36b80c3c801a8b3a8076d195aac8421355632e2f79f6f817fe7fbf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\"" May 9 23:30:58.054902 containerd[1452]: time="2025-05-09T23:30:58.054511941Z" level=info msg="StartContainer for \"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\"" May 9 23:30:58.055466 containerd[1452]: time="2025-05-09T23:30:58.055441913Z" level=info msg="connecting to shim 09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141" address="unix:///run/containerd/s/4f4f78164cd017d9dd5cc0da9a66a1ec059e203eb35fc2efa28a58ca28a96419" protocol=ttrpc version=3 May 9 23:30:58.087029 systemd[1]: Started cri-containerd-09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141.scope - libcontainer container 09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141. 
May 9 23:30:58.118173 containerd[1452]: time="2025-05-09T23:30:58.118072685Z" level=info msg="StartContainer for \"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" returns successfully" May 9 23:30:58.174030 containerd[1452]: time="2025-05-09T23:30:58.173903214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" id:\"8603f89e414ca2e113895654a8ca2539eb6d452cbc3b4209f1f14a4489144439\" pid:4640 exited_at:{seconds:1746833458 nanos:173598610}" May 9 23:30:58.373374 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 9 23:30:59.043024 kubelet[2576]: E0509 23:30:59.042993 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:30:59.781410 kubelet[2576]: E0509 23:30:59.781358 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:00.046983 kubelet[2576]: E0509 23:31:00.046383 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:00.218790 containerd[1452]: time="2025-05-09T23:31:00.218735607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" id:\"2cde40b6e35dc0eb68ce9c020491bddcc607f0e3fcb3a7e77c3f258ac196190d\" pid:4806 exit_status:1 exited_at:{seconds:1746833460 nanos:218404203}" May 9 23:31:01.245283 systemd-networkd[1391]: lxc_health: Link UP May 9 23:31:01.248843 systemd-networkd[1391]: lxc_health: Gained carrier May 9 23:31:01.863157 kubelet[2576]: E0509 23:31:01.863118 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:01.879623 kubelet[2576]: I0509 23:31:01.879550 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7rbd" podStartSLOduration=8.879534799 podStartE2EDuration="8.879534799s" podCreationTimestamp="2025-05-09 23:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:30:59.057432955 +0000 UTC m=+87.385960901" watchObservedRunningTime="2025-05-09 23:31:01.879534799 +0000 UTC m=+90.208062745" May 9 23:31:02.051534 kubelet[2576]: E0509 23:31:02.051484 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:02.314012 systemd-networkd[1391]: lxc_health: Gained IPv6LL May 9 23:31:02.367206 containerd[1452]: time="2025-05-09T23:31:02.367077379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" id:\"5b345f1db059bc85ce69734660c69aa5045ab69b7248bf1ae6385298abb237ba\" pid:5178 exited_at:{seconds:1746833462 nanos:366555132}" May 9 23:31:02.779593 kubelet[2576]: E0509 23:31:02.779226 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:03.053135 kubelet[2576]: E0509 23:31:03.053037 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:04.463762 containerd[1452]: time="2025-05-09T23:31:04.463599998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" 
id:\"f83777fa109ecb51983bdae1ba5e3702ed10fe4ea2bde3b41d7a356e6e9279a6\" pid:5208 exited_at:{seconds:1746833464 nanos:463210233}" May 9 23:31:06.618438 containerd[1452]: time="2025-05-09T23:31:06.618391730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ffeb44c30902f7ce4f6470b953eaf09c4f8ac62adfcd28048d2d5b7bea2141\" id:\"d5e8eea2349f3227b90608b254aa4b4b55175942e4389d7bd041c9127cb77748\" pid:5238 exited_at:{seconds:1746833466 nanos:617810082}" May 9 23:31:06.631588 sshd[4380]: Connection closed by 10.0.0.1 port 36058 May 9 23:31:06.632087 sshd-session[4377]: pam_unix(sshd:session): session closed for user core May 9 23:31:06.634994 systemd[1]: sshd@26-10.0.0.70:22-10.0.0.1:36058.service: Deactivated successfully. May 9 23:31:06.637284 systemd[1]: session-27.scope: Deactivated successfully. May 9 23:31:06.639294 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit. May 9 23:31:06.640797 systemd-logind[1438]: Removed session 27. May 9 23:31:07.779905 kubelet[2576]: E0509 23:31:07.779837 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 23:31:07.780323 kubelet[2576]: E0509 23:31:07.780076 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"