May 15 23:34:41.976556 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 23:34:41.976577 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu May 15 22:10:19 -00 2025 May 15 23:34:41.976587 kernel: KASLR enabled May 15 23:34:41.976593 kernel: efi: EFI v2.7 by EDK II May 15 23:34:41.976598 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 May 15 23:34:41.976604 kernel: random: crng init done May 15 23:34:41.976610 kernel: secureboot: Secure boot disabled May 15 23:34:41.976616 kernel: ACPI: Early table checksum verification disabled May 15 23:34:41.976622 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 15 23:34:41.976629 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 23:34:41.976635 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976641 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976646 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976652 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976659 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976667 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976673 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976679 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976685 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:34:41.976691 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 23:34:41.976697 kernel: NUMA: Failed to initialise from firmware May 15 23:34:41.976703 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:34:41.976709 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 15 23:34:41.976715 kernel: Zone ranges: May 15 23:34:41.976721 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:34:41.976729 kernel: DMA32 empty May 15 23:34:41.976735 kernel: Normal empty May 15 23:34:41.976741 kernel: Movable zone start for each node May 15 23:34:41.976746 kernel: Early memory node ranges May 15 23:34:41.976753 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] May 15 23:34:41.976759 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] May 15 23:34:41.976765 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] May 15 23:34:41.976771 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 15 23:34:41.976778 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 15 23:34:41.976784 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 15 23:34:41.976790 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 15 23:34:41.976796 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 15 23:34:41.976803 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 23:34:41.976809 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:34:41.976827 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 23:34:41.976851 kernel: psci: 
probing for conduit method from ACPI. May 15 23:34:41.976857 kernel: psci: PSCIv1.1 detected in firmware. May 15 23:34:41.976866 kernel: psci: Using standard PSCI v0.2 function IDs May 15 23:34:41.976873 kernel: psci: Trusted OS migration not required May 15 23:34:41.976880 kernel: psci: SMC Calling Convention v1.1 May 15 23:34:41.976887 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 23:34:41.976894 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 23:34:41.976900 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 23:34:41.976907 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 23:34:41.976913 kernel: Detected PIPT I-cache on CPU0 May 15 23:34:41.976920 kernel: CPU features: detected: GIC system register CPU interface May 15 23:34:41.976927 kernel: CPU features: detected: Hardware dirty bit management May 15 23:34:41.976933 kernel: CPU features: detected: Spectre-v4 May 15 23:34:41.976941 kernel: CPU features: detected: Spectre-BHB May 15 23:34:41.976947 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 23:34:41.976954 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 23:34:41.976960 kernel: CPU features: detected: ARM erratum 1418040 May 15 23:34:41.976967 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 23:34:41.976974 kernel: alternatives: applying boot alternatives May 15 23:34:41.976982 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5842e6d9a9272dc71039ff31db7df13c5a397d9a9917b662574c24d437910f6a May 15 23:34:41.976989 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 23:34:41.976995 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 23:34:41.977002 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 23:34:41.977008 kernel: Fallback order for Node 0: 0 May 15 23:34:41.977016 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 23:34:41.977022 kernel: Policy zone: DMA May 15 23:34:41.977029 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 23:34:41.977035 kernel: software IO TLB: area num 4. May 15 23:34:41.977041 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 15 23:34:41.977048 kernel: Memory: 2387344K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184944K reserved, 0K cma-reserved) May 15 23:34:41.977055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 23:34:41.977061 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 23:34:41.977068 kernel: rcu: RCU event tracing is enabled. May 15 23:34:41.977075 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 23:34:41.977096 kernel: Trampoline variant of Tasks RCU enabled. May 15 23:34:41.977105 kernel: Tracing variant of Tasks RCU enabled. May 15 23:34:41.977113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 23:34:41.977120 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 23:34:41.977126 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 23:34:41.977133 kernel: GICv3: 256 SPIs implemented May 15 23:34:41.977139 kernel: GICv3: 0 Extended SPIs implemented May 15 23:34:41.977145 kernel: Root IRQ handler: gic_handle_irq May 15 23:34:41.977152 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 23:34:41.977158 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 23:34:41.977165 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 23:34:41.977171 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 15 23:34:41.977178 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 15 23:34:41.977186 kernel: GICv3: using LPI property table @0x00000000400f0000 May 15 23:34:41.977193 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 15 23:34:41.977199 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 23:34:41.977206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:34:41.977212 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 23:34:41.977219 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 23:34:41.977226 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 23:34:41.977232 kernel: arm-pv: using stolen time PV May 15 23:34:41.977239 kernel: Console: colour dummy device 80x25 May 15 23:34:41.977246 kernel: ACPI: Core revision 20230628 May 15 23:34:41.977253 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 23:34:41.977261 kernel: pid_max: default: 32768 minimum: 301 May 15 23:34:41.977268 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 23:34:41.977275 kernel: landlock: Up and running. May 15 23:34:41.977281 kernel: SELinux: Initializing. May 15 23:34:41.977288 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:34:41.977295 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:34:41.977302 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 23:34:41.977309 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:34:41.977315 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:34:41.977323 kernel: rcu: Hierarchical SRCU implementation. May 15 23:34:41.977330 kernel: rcu: Max phase no-delay instances is 400. May 15 23:34:41.977336 kernel: Platform MSI: ITS@0x8080000 domain created May 15 23:34:41.977343 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 23:34:41.977350 kernel: Remapping and enabling EFI services. May 15 23:34:41.977356 kernel: smp: Bringing up secondary CPUs ... 
May 15 23:34:41.977363 kernel: Detected PIPT I-cache on CPU1 May 15 23:34:41.977370 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 23:34:41.977376 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 15 23:34:41.977384 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:34:41.977391 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 23:34:41.977403 kernel: Detected PIPT I-cache on CPU2 May 15 23:34:41.977411 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 23:34:41.977418 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 15 23:34:41.977425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:34:41.977432 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 23:34:41.977439 kernel: Detected PIPT I-cache on CPU3 May 15 23:34:41.977446 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 23:34:41.977454 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 15 23:34:41.977462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:34:41.977469 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 23:34:41.977476 kernel: smp: Brought up 1 node, 4 CPUs May 15 23:34:41.977482 kernel: SMP: Total of 4 processors activated. May 15 23:34:41.977490 kernel: CPU features: detected: 32-bit EL0 Support May 15 23:34:41.977497 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 23:34:41.977504 kernel: CPU features: detected: Common not Private translations May 15 23:34:41.977512 kernel: CPU features: detected: CRC32 instructions May 15 23:34:41.977520 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 23:34:41.977527 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 23:34:41.977534 kernel: CPU features: detected: LSE atomic instructions May 15 23:34:41.977541 kernel: CPU features: detected: Privileged Access Never May 15 23:34:41.977548 kernel: CPU features: detected: RAS Extension Support May 15 23:34:41.977555 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 23:34:41.977562 kernel: CPU: All CPU(s) started at EL1 May 15 23:34:41.977568 kernel: alternatives: applying system-wide alternatives May 15 23:34:41.977577 kernel: devtmpfs: initialized May 15 23:34:41.977584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 23:34:41.977591 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 23:34:41.977598 kernel: pinctrl core: initialized pinctrl subsystem May 15 23:34:41.977605 kernel: SMBIOS 3.0.0 present. 
May 15 23:34:41.977612 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 15 23:34:41.977619 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 23:34:41.977626 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 23:34:41.977633 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 23:34:41.977642 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 23:34:41.977649 kernel: audit: initializing netlink subsys (disabled) May 15 23:34:41.977656 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 May 15 23:34:41.977663 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 23:34:41.977670 kernel: cpuidle: using governor menu May 15 23:34:41.977677 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 23:34:41.977684 kernel: ASID allocator initialised with 32768 entries May 15 23:34:41.977691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 23:34:41.977711 kernel: Serial: AMBA PL011 UART driver May 15 23:34:41.977719 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 23:34:41.977726 kernel: Modules: 0 pages in range for non-PLT usage May 15 23:34:41.977733 kernel: Modules: 509232 pages in range for PLT usage May 15 23:34:41.977740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 23:34:41.977747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 23:34:41.977754 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 23:34:41.977761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 23:34:41.977768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 23:34:41.977775 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 23:34:41.977784 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 23:34:41.977791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 23:34:41.977798 kernel: ACPI: Added _OSI(Module Device) May 15 23:34:41.977818 kernel: ACPI: Added _OSI(Processor Device) May 15 23:34:41.977825 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 23:34:41.977833 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 23:34:41.977861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 23:34:41.977868 kernel: ACPI: Interpreter enabled May 15 23:34:41.977875 kernel: ACPI: Using GIC for interrupt routing May 15 23:34:41.977882 kernel: ACPI: MCFG table detected, 1 entries May 15 23:34:41.977894 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 23:34:41.977901 kernel: printk: console [ttyAMA0] enabled May 15 23:34:41.977908 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 23:34:41.978042 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 23:34:41.978136 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 23:34:41.978208 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 23:34:41.978272 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 23:34:41.978340 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 23:34:41.978349 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 15 23:34:41.978356 
kernel: PCI host bridge to bus 0000:00 May 15 23:34:41.978440 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 23:34:41.978502 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 23:34:41.978562 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 23:34:41.978622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 23:34:41.978706 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 23:34:41.978780 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 23:34:41.978847 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 23:34:41.978913 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 23:34:41.978978 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 23:34:41.979043 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 23:34:41.979124 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 23:34:41.979200 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 23:34:41.979260 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 23:34:41.979332 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 23:34:41.979389 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 23:34:41.979399 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 23:34:41.979406 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 23:34:41.979413 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 23:34:41.979422 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 23:34:41.979429 kernel: iommu: Default domain type: Translated May 15 23:34:41.979436 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 23:34:41.979443 kernel: efivars: Registered efivars operations May 15 23:34:41.979450 kernel: vgaarb: loaded May 15 23:34:41.979457 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 23:34:41.979464 kernel: VFS: Disk quotas dquot_6.6.0 May 15 23:34:41.979471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 23:34:41.979478 kernel: pnp: PnP ACPI init May 15 23:34:41.979556 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 23:34:41.979566 kernel: pnp: PnP ACPI: found 1 devices May 15 23:34:41.979573 kernel: NET: Registered PF_INET protocol family May 15 23:34:41.979580 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 23:34:41.979587 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 23:34:41.979594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 23:34:41.979601 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 23:34:41.979608 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 23:34:41.979621 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 23:34:41.979628 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:34:41.979635 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:34:41.979642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 23:34:41.979653 kernel: PCI: CLS 0 bytes, default 64 May 15 23:34:41.979660 kernel: kvm [1]: HYP mode not available 
May 15 23:34:41.979667 kernel: Initialise system trusted keyrings May 15 23:34:41.979674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 23:34:41.979681 kernel: Key type asymmetric registered May 15 23:34:41.979689 kernel: Asymmetric key parser 'x509' registered May 15 23:34:41.979696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 23:34:41.979703 kernel: io scheduler mq-deadline registered May 15 23:34:41.979710 kernel: io scheduler kyber registered May 15 23:34:41.979717 kernel: io scheduler bfq registered May 15 23:34:41.979724 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 23:34:41.979731 kernel: ACPI: button: Power Button [PWRB] May 15 23:34:41.979738 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 23:34:41.979820 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 23:34:41.979832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 23:34:41.979839 kernel: thunder_xcv, ver 1.0 May 15 23:34:41.979846 kernel: thunder_bgx, ver 1.0 May 15 23:34:41.979853 kernel: nicpf, ver 1.0 May 15 23:34:41.979860 kernel: nicvf, ver 1.0 May 15 23:34:41.979937 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 23:34:41.979999 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T23:34:41 UTC (1747352081) May 15 23:34:41.980008 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 23:34:41.980018 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 23:34:41.980025 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 23:34:41.980032 kernel: watchdog: Hard watchdog permanently disabled May 15 23:34:41.980039 kernel: NET: Registered PF_INET6 protocol family May 15 23:34:41.980046 kernel: Segment Routing with IPv6 May 15 23:34:41.980052 kernel: In-situ OAM (IOAM) with IPv6 May 15 23:34:41.980059 kernel: NET: Registered PF_PACKET protocol family May 15 23:34:41.980066 kernel: Key type dns_resolver registered May 15 23:34:41.980073 kernel: registered taskstats version 1 May 15 23:34:41.980085 kernel: Loading compiled-in X.509 certificates May 15 23:34:41.980106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 06f4063ae17661ba03d0a772a07398655eacda2e' May 15 23:34:41.980113 kernel: Key type .fscrypt registered May 15 23:34:41.980120 kernel: Key type fscrypt-provisioning registered May 15 23:34:41.980127 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 23:34:41.980134 kernel: ima: Allocated hash algorithm: sha1 May 15 23:34:41.980141 kernel: ima: No architecture policies found May 15 23:34:41.980148 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 23:34:41.980155 kernel: clk: Disabling unused clocks May 15 23:34:41.980164 kernel: Freeing unused kernel memory: 38464K May 15 23:34:41.980170 kernel: Run /init as init process May 15 23:34:41.980177 kernel: with arguments: May 15 23:34:41.980184 kernel: /init May 15 23:34:41.980191 kernel: with environment: May 15 23:34:41.980197 kernel: HOME=/ May 15 23:34:41.980204 kernel: TERM=linux May 15 23:34:41.980211 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 23:34:41.980219 systemd[1]: Successfully made /usr/ read-only. 
May 15 23:34:41.980230 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:34:41.980238 systemd[1]: Detected virtualization kvm. May 15 23:34:41.980246 systemd[1]: Detected architecture arm64. May 15 23:34:41.980253 systemd[1]: Running in initrd. May 15 23:34:41.980260 systemd[1]: No hostname configured, using default hostname. May 15 23:34:41.980268 systemd[1]: Hostname set to . May 15 23:34:41.980275 systemd[1]: Initializing machine ID from VM UUID. May 15 23:34:41.980284 systemd[1]: Queued start job for default target initrd.target. May 15 23:34:41.980291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:34:41.980299 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:34:41.980307 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 23:34:41.980314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:34:41.980322 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 23:34:41.980330 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 23:34:41.980340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 23:34:41.980348 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 23:34:41.980355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:34:41.980363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:34:41.980370 systemd[1]: Reached target paths.target - Path Units. May 15 23:34:41.980377 systemd[1]: Reached target slices.target - Slice Units. May 15 23:34:41.980385 systemd[1]: Reached target swap.target - Swaps. May 15 23:34:41.980392 systemd[1]: Reached target timers.target - Timer Units. May 15 23:34:41.980399 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:34:41.980408 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:34:41.980416 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:34:41.980423 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 23:34:41.980431 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:34:41.980438 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:34:41.980445 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:34:41.980453 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:34:41.980460 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:34:41.980469 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:34:41.980477 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:34:41.980484 systemd[1]: Starting systemd-fsck-usr.service... 
May 15 23:34:41.980492 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:34:41.980499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:34:41.980506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:34:41.980514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:34:41.980522 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:34:41.980531 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:34:41.980539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:34:41.980547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:34:41.980570 systemd-journald[238]: Collecting audit messages is disabled. May 15 23:34:41.980591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 23:34:41.980599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:34:41.980607 systemd-journald[238]: Journal started May 15 23:34:41.980631 systemd-journald[238]: Runtime Journal (/run/log/journal/3db7fd1d318642be805482929e5fc07f) is 5.9M, max 47.3M, 41.4M free. May 15 23:34:41.961652 systemd-modules-load[239]: Inserted module 'overlay' May 15 23:34:41.982583 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:34:41.983467 kernel: Bridge firewalling registered May 15 23:34:41.984017 systemd-modules-load[239]: Inserted module 'br_netfilter' May 15 23:34:41.984394 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:34:41.986292 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:34:41.990790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:34:41.992792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:34:41.995478 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:34:42.003674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:34:42.006220 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 23:34:42.011304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:34:42.014003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:34:42.015610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:34:42.019942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:34:42.025998 dracut-cmdline[273]: dracut-dracut-053 May 15 23:34:42.034863 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5842e6d9a9272dc71039ff31db7df13c5a397d9a9917b662574c24d437910f6a May 15 23:34:42.067616 systemd-resolved[282]: Positive Trust Anchors: May 15 23:34:42.067635 systemd-resolved[282]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:34:42.067667 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:34:42.073178 systemd-resolved[282]: Defaulting to hostname 'linux'. May 15 23:34:42.074197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:34:42.077414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:34:42.097112 kernel: SCSI subsystem initialized May 15 23:34:42.102108 kernel: Loading iSCSI transport class v2.0-870. May 15 23:34:42.110119 kernel: iscsi: registered transport (tcp) May 15 23:34:42.123151 kernel: iscsi: registered transport (qla4xxx) May 15 23:34:42.123193 kernel: QLogic iSCSI HBA Driver May 15 23:34:42.161711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 23:34:42.163980 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:34:42.190067 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 23:34:42.190130 kernel: device-mapper: uevent: version 1.0.3 May 15 23:34:42.190143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:34:42.237121 kernel: raid6: neonx8 gen() 15716 MB/s May 15 23:34:42.254123 kernel: raid6: neonx4 gen() 15704 MB/s May 15 23:34:42.271116 kernel: raid6: neonx2 gen() 13114 MB/s May 15 23:34:42.288128 kernel: raid6: neonx1 gen() 10394 MB/s May 15 23:34:42.305115 kernel: raid6: int64x8 gen() 6732 MB/s May 15 23:34:42.322118 kernel: raid6: int64x4 gen() 7309 MB/s May 15 23:34:42.339116 kernel: raid6: int64x2 gen() 6077 MB/s May 15 23:34:42.356224 kernel: raid6: int64x1 gen() 5017 MB/s May 15 23:34:42.356255 kernel: raid6: using algorithm neonx8 gen() 15716 MB/s May 15 23:34:42.374198 kernel: raid6: .... xor() 11975 MB/s, rmw enabled May 15 23:34:42.374223 kernel: raid6: using neon recovery algorithm May 15 23:34:42.379442 kernel: xor: measuring software checksum speed May 15 23:34:42.379459 kernel: 8regs : 21641 MB/sec May 15 23:34:42.380110 kernel: 32regs : 21687 MB/sec May 15 23:34:42.381319 kernel: arm64_neon : 22924 MB/sec May 15 23:34:42.381330 kernel: xor: using function: arm64_neon (22924 MB/sec) May 15 23:34:42.432117 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:34:42.442477 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:34:42.444907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:34:42.469333 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 15 23:34:42.472927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:34:42.475789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 15 23:34:42.499960 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation May 15 23:34:42.523479 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:34:42.525635 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:34:42.577653 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:34:42.580189 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:34:42.603128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 23:34:42.604530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:34:42.606463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:34:42.608897 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:34:42.612712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:34:42.623109 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 23:34:42.627110 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:34:42.630335 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:34:42.630367 kernel: GPT:9289727 != 19775487 May 15 23:34:42.630381 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:34:42.633792 kernel: GPT:9289727 != 19775487 May 15 23:34:42.633821 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:34:42.633831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:34:42.631190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:34:42.637546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:34:42.637653 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:34:42.641678 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:34:42.642831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:34:42.642969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:34:42.645189 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:34:42.649001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:34:42.659548 kernel: BTRFS: device fsid 44e3c267-913e-4e36-8a01-ed9d3f105561 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (505) May 15 23:34:42.659586 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) May 15 23:34:42.668292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:34:42.677326 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:34:42.686237 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 23:34:42.698663 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 23:34:42.699892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 23:34:42.709820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:34:42.711713 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 15 23:34:42.713716 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:34:42.729531 disk-uuid[551]: Primary Header is updated. May 15 23:34:42.729531 disk-uuid[551]: Secondary Entries is updated. May 15 23:34:42.729531 disk-uuid[551]: Secondary Header is updated. May 15 23:34:42.733115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:34:42.737611 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:34:43.740112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:34:43.741441 disk-uuid[556]: The operation has completed successfully. May 15 23:34:43.765105 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:34:43.765210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:34:43.799784 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:34:43.816653 sh[572]: Success May 15 23:34:43.836111 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 23:34:43.864448 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:34:43.875335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:34:43.878008 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:34:43.887752 kernel: BTRFS info (device dm-0): first mount of filesystem 44e3c267-913e-4e36-8a01-ed9d3f105561 May 15 23:34:43.887794 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 23:34:43.887815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:34:43.889715 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:34:43.889729 kernel: BTRFS info (device dm-0): using free space tree May 15 23:34:43.893719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:34:43.895171 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:34:43.895964 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:34:43.898901 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 23:34:43.920139 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:34:43.920193 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:34:43.920204 kernel: BTRFS info (device vda6): using free space tree May 15 23:34:43.923119 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:34:43.928122 kernel: BTRFS info (device vda6): last unmount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:34:43.930545 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:34:43.932995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:34:44.000941 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:34:44.004826 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 23:34:44.040859 ignition[665]: Ignition 2.20.0 May 15 23:34:44.040867 ignition[665]: Stage: fetch-offline May 15 23:34:44.040897 ignition[665]: no configs at "/usr/lib/ignition/base.d" May 15 23:34:44.040906 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:44.041052 ignition[665]: parsed url from cmdline: "" May 15 23:34:44.041055 ignition[665]: no config URL provided May 15 23:34:44.041059 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:34:44.041067 ignition[665]: no config at "/usr/lib/ignition/user.ign" May 15 23:34:44.045863 systemd-networkd[758]: lo: Link UP May 15 23:34:44.041114 ignition[665]: op(1): [started] loading QEMU firmware config module May 15 23:34:44.045867 systemd-networkd[758]: lo: Gained carrier May 15 23:34:44.041118 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:34:44.046740 systemd-networkd[758]: Enumeration completed May 15 23:34:44.046854 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:34:44.047195 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:34:44.047199 systemd-networkd[758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:34:44.059515 ignition[665]: op(1): [finished] loading QEMU firmware config module May 15 23:34:44.047927 systemd-networkd[758]: eth0: Link UP May 15 23:34:44.047930 systemd-networkd[758]: eth0: Gained carrier May 15 23:34:44.047936 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:34:44.049156 systemd[1]: Reached target network.target - Network. May 15 23:34:44.066214 systemd-networkd[758]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:34:44.102674 ignition[665]: parsing config with SHA512: ebe795f17e0e8fe37a7eb637e3f10b93c2301c2efc9e7a6ff71b1430a21ea9b363d356d556d53d1b131a72ecdcf7f90d66fdf7b511e660d0882da9cd968b83e9 May 15 23:34:44.107196 unknown[665]: fetched base config from "system" May 15 23:34:44.107206 unknown[665]: fetched user config from "qemu" May 15 23:34:44.108812 ignition[665]: fetch-offline: fetch-offline passed May 15 23:34:44.108932 ignition[665]: Ignition finished successfully May 15 23:34:44.111802 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:34:44.113481 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:34:44.114224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 23:34:44.145419 ignition[770]: Ignition 2.20.0 May 15 23:34:44.145430 ignition[770]: Stage: kargs May 15 23:34:44.145590 ignition[770]: no configs at "/usr/lib/ignition/base.d" May 15 23:34:44.145601 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:44.149534 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:34:44.146552 ignition[770]: kargs: kargs passed May 15 23:34:44.151618 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 15 23:34:44.146598 ignition[770]: Ignition finished successfully May 15 23:34:44.176322 ignition[778]: Ignition 2.20.0 May 15 23:34:44.176332 ignition[778]: Stage: disks May 15 23:34:44.176480 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 15 23:34:44.176489 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:44.177438 ignition[778]: disks: disks passed May 15 23:34:44.179412 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:34:44.177485 ignition[778]: Ignition finished successfully May 15 23:34:44.182507 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:34:44.184107 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:34:44.186230 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:34:44.188217 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:34:44.190082 systemd[1]: Reached target basic.target - Basic System. May 15 23:34:44.192771 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 23:34:44.221059 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:34:44.225057 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:34:44.227421 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 23:34:44.283105 kernel: EXT4-fs (vda9): mounted filesystem 4099475e-0c33-48d1-8a7f-66c442027985 r/w with ordered data mode. Quota mode: none. May 15 23:34:44.283601 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:34:44.284840 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:34:44.287239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:34:44.288766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:34:44.289786 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 23:34:44.289827 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:34:44.289909 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:34:44.307655 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:34:44.310196 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 23:34:44.315797 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796) May 15 23:34:44.315820 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:34:44.315831 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:34:44.315842 kernel: BTRFS info (device vda6): using free space tree May 15 23:34:44.318102 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:34:44.318793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 23:34:44.354066 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:34:44.357342 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory May 15 23:34:44.361269 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:34:44.364746 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:34:44.431154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 23:34:44.433517 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:34:44.435166 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:34:44.450135 kernel: BTRFS info (device vda6): last unmount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:34:44.468785 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 23:34:44.473882 ignition[909]: INFO : Ignition 2.20.0 May 15 23:34:44.473882 ignition[909]: INFO : Stage: mount May 15 23:34:44.476258 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:34:44.476258 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:44.476258 ignition[909]: INFO : mount: mount passed May 15 23:34:44.476258 ignition[909]: INFO : Ignition finished successfully May 15 23:34:44.476811 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:34:44.479000 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:34:45.024131 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:34:45.025580 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:34:45.042144 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (923) May 15 23:34:45.044625 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:34:45.044651 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:34:45.045364 kernel: BTRFS info (device vda6): using free space tree May 15 23:34:45.048115 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:34:45.049012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 23:34:45.084884 ignition[940]: INFO : Ignition 2.20.0 May 15 23:34:45.084884 ignition[940]: INFO : Stage: files May 15 23:34:45.086464 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:34:45.086464 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:45.086464 ignition[940]: DEBUG : files: compiled without relabeling support, skipping May 15 23:34:45.089801 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 23:34:45.089801 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 23:34:45.092842 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 23:34:45.094312 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 23:34:45.095491 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 23:34:45.095478 unknown[940]: wrote ssh authorized keys file for user: core May 15 23:34:45.097872 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 15 23:34:45.099780 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 15 23:34:45.174190 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 23:34:45.363933 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 15 23:34:45.363933 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:34:45.367607 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 23:34:45.697478 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 23:34:45.764679 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:34:45.766546 ignition[940]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 15 23:34:45.766546 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 15 23:34:46.056189 systemd-networkd[758]: eth0: Gained IPv6LL May 15 23:34:46.196783 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 23:34:46.591678 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 15 23:34:46.593876 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 23:34:46.593876 ignition[940]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 23:34:46.609421 ignition[940]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:34:46.612969 ignition[940]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:34:46.612969 ignition[940]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 23:34:46.612969 ignition[940]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 23:34:46.612969 ignition[940]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 23:34:46.612969 ignition[940]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 23:34:46.622320 ignition[940]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 23:34:46.622320 ignition[940]: INFO : files: files passed May 15 23:34:46.622320 ignition[940]: INFO : Ignition finished successfully May 15 23:34:46.618149 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 23:34:46.621234 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 23:34:46.634371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 23:34:46.637154 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 23:34:46.637234 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 23:34:46.642247 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 15 23:34:46.643886 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:34:46.643886 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 23:34:46.646896 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:34:46.645610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:34:46.648362 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 23:34:46.651155 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 23:34:46.704053 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 23:34:46.704190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 23:34:46.706361 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 23:34:46.708168 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 23:34:46.709919 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 23:34:46.710745 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 23:34:46.739923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:34:46.742485 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 23:34:46.768442 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 23:34:46.770767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:34:46.772111 systemd[1]: Stopped target timers.target - Timer Units. May 15 23:34:46.773920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 23:34:46.774076 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:34:46.776615 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 23:34:46.778726 systemd[1]: Stopped target basic.target - Basic System. May 15 23:34:46.780486 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 23:34:46.782374 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:34:46.784371 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 23:34:46.786442 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
May 15 23:34:46.788340 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:34:46.790266 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 23:34:46.792224 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 23:34:46.794036 systemd[1]: Stopped target swap.target - Swaps. May 15 23:34:46.795689 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 23:34:46.795837 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 23:34:46.798143 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 23:34:46.800137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:34:46.802127 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 23:34:46.802235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:34:46.804274 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 23:34:46.804403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 23:34:46.807350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 23:34:46.807472 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:34:46.809550 systemd[1]: Stopped target paths.target - Path Units. May 15 23:34:46.811021 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 23:34:46.812157 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:34:46.814208 systemd[1]: Stopped target slices.target - Slice Units. May 15 23:34:46.815731 systemd[1]: Stopped target sockets.target - Socket Units. May 15 23:34:46.817521 systemd[1]: iscsid.socket: Deactivated successfully. May 15 23:34:46.817612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:34:46.819762 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 23:34:46.819862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:34:46.821476 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 23:34:46.821592 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:34:46.823401 systemd[1]: ignition-files.service: Deactivated successfully. May 15 23:34:46.823514 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 23:34:46.825962 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 23:34:46.828348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 23:34:46.829387 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 23:34:46.829512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:34:46.831647 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 23:34:46.831757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:34:46.846309 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 23:34:46.846394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 23:34:46.854478 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 15 23:34:46.857018 ignition[998]: INFO : Ignition 2.20.0 May 15 23:34:46.857018 ignition[998]: INFO : Stage: umount May 15 23:34:46.858824 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:34:46.858824 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:34:46.858824 ignition[998]: INFO : umount: umount passed May 15 23:34:46.858824 ignition[998]: INFO : Ignition finished successfully May 15 23:34:46.859746 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 23:34:46.859876 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 23:34:46.864134 systemd[1]: Stopped target network.target - Network. May 15 23:34:46.865417 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 23:34:46.865499 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 23:34:46.867194 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 23:34:46.867243 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 23:34:46.868939 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 23:34:46.868985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 23:34:46.870589 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 23:34:46.870632 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 23:34:46.873284 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 23:34:46.874988 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 23:34:46.877396 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 23:34:46.877505 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 23:34:46.881438 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 23:34:46.881666 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 23:34:46.881750 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 23:34:46.884437 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 23:34:46.885014 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 23:34:46.885066 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 23:34:46.887972 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 23:34:46.888852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 23:34:46.888913 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:34:46.893065 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:34:46.893135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:34:46.896216 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 23:34:46.896267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 23:34:46.898137 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 23:34:46.898187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:34:46.901279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:34:46.903697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 15 23:34:46.903758 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 23:34:46.916782 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 23:34:46.916922 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:34:46.919166 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 23:34:46.919275 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 23:34:46.925014 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 23:34:46.925107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 23:34:46.926638 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 23:34:46.926673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:34:46.928593 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 23:34:46.928657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 23:34:46.931515 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 23:34:46.931559 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 23:34:46.934392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:34:46.934441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:34:46.937877 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 23:34:46.939160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 23:34:46.939218 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:34:46.942052 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 23:34:46.942135 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:34:46.944155 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 23:34:46.944204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:34:46.946213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:34:46.946260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:34:46.949621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 23:34:46.949675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 23:34:46.951275 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 23:34:46.951392 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 23:34:46.953563 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 23:34:46.953662 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 23:34:46.955738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 23:34:46.955843 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 23:34:46.958616 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 23:34:46.960827 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 23:34:46.979798 systemd[1]: Switching root. May 15 23:34:47.009382 systemd-journald[238]: Journal stopped May 15 23:34:47.782379 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
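The initrd journal stops here while PID 1 switches root, but these messages survive in the persistent journal and can be pulled back out after boot. A minimal sketch using standard journalctl options (the filter strings are only an example):

# Sketch: re-read the current boot's journal in the short-precise format seen above
# and keep just the Ignition and switch-root related lines.
import subprocess

out = subprocess.run(
    ["journalctl", "-b", "0", "-o", "short-precise", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    if "ignition" in line.lower() or "Switching root" in line:
        print(line)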
May 15 23:34:47.782443 kernel: SELinux: policy capability network_peer_controls=1 May 15 23:34:47.782456 kernel: SELinux: policy capability open_perms=1 May 15 23:34:47.782466 kernel: SELinux: policy capability extended_socket_class=1 May 15 23:34:47.782476 kernel: SELinux: policy capability always_check_network=0 May 15 23:34:47.782486 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 23:34:47.782503 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 23:34:47.782513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 23:34:47.782522 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 23:34:47.782532 kernel: audit: type=1403 audit(1747352087.181:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 23:34:47.782547 systemd[1]: Successfully loaded SELinux policy in 32.218ms. May 15 23:34:47.782563 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.080ms. May 15 23:34:47.782574 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:34:47.782585 systemd[1]: Detected virtualization kvm. May 15 23:34:47.782596 systemd[1]: Detected architecture arm64. May 15 23:34:47.782608 systemd[1]: Detected first boot. May 15 23:34:47.782618 systemd[1]: Initializing machine ID from VM UUID. May 15 23:34:47.782629 zram_generator::config[1045]: No configuration found. May 15 23:34:47.782640 kernel: NET: Registered PF_VSOCK protocol family May 15 23:34:47.782649 systemd[1]: Populated /etc with preset unit settings. May 15 23:34:47.782660 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 23:34:47.782671 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 23:34:47.782681 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 23:34:47.782693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 23:34:47.782704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 23:34:47.782714 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 23:34:47.782725 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 23:34:47.782735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 23:34:47.782748 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 23:34:47.782759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 23:34:47.782770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 23:34:47.782780 systemd[1]: Created slice user.slice - User and Session Slice. May 15 23:34:47.782793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:34:47.782803 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:34:47.782814 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 23:34:47.782824 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
May 15 23:34:47.782835 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 23:34:47.782845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:34:47.782856 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 23:34:47.782866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:34:47.782877 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 23:34:47.782889 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 23:34:47.782899 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 23:34:47.782910 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 23:34:47.782920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:34:47.782993 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:34:47.783010 systemd[1]: Reached target slices.target - Slice Units. May 15 23:34:47.783021 systemd[1]: Reached target swap.target - Swaps. May 15 23:34:47.783032 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 23:34:47.783048 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 23:34:47.783058 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 23:34:47.783076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:34:47.783099 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:34:47.783143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:34:47.783156 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 23:34:47.783168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 23:34:47.783206 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 23:34:47.783220 systemd[1]: Mounting media.mount - External Media Directory... May 15 23:34:47.783234 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 23:34:47.783245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 23:34:47.783255 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 23:34:47.783267 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 23:34:47.783277 systemd[1]: Reached target machines.target - Containers. May 15 23:34:47.783288 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 23:34:47.783298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:34:47.783309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:34:47.783321 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 23:34:47.783332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:34:47.783342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:34:47.783353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 15 23:34:47.783365 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 23:34:47.783376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:34:47.783386 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 23:34:47.783396 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 23:34:47.783407 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 23:34:47.783419 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 23:34:47.783430 systemd[1]: Stopped systemd-fsck-usr.service. May 15 23:34:47.783441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:34:47.783451 kernel: loop: module loaded May 15 23:34:47.783461 kernel: ACPI: bus type drm_connector registered May 15 23:34:47.783471 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:34:47.783481 kernel: fuse: init (API version 7.39) May 15 23:34:47.783491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:34:47.783503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 23:34:47.783514 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 23:34:47.783525 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 23:34:47.783540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:34:47.783550 systemd[1]: verity-setup.service: Deactivated successfully. May 15 23:34:47.783561 systemd[1]: Stopped verity-setup.service. May 15 23:34:47.783598 systemd-journald[1124]: Collecting audit messages is disabled. May 15 23:34:47.783621 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 23:34:47.783634 systemd-journald[1124]: Journal started May 15 23:34:47.783656 systemd-journald[1124]: Runtime Journal (/run/log/journal/3db7fd1d318642be805482929e5fc07f) is 5.9M, max 47.3M, 41.4M free. May 15 23:34:47.567194 systemd[1]: Queued start job for default target multi-user.target. May 15 23:34:47.579052 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 23:34:47.579433 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 23:34:47.786428 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:34:47.787046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 23:34:47.788930 systemd[1]: Mounted media.mount - External Media Directory. May 15 23:34:47.790046 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 23:34:47.791315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 23:34:47.792518 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 23:34:47.793738 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 23:34:47.795181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:34:47.796639 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 23:34:47.796810 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 15 23:34:47.798231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:34:47.798383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:34:47.799752 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:34:47.799923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:34:47.802456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:34:47.802704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:34:47.804291 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 23:34:47.804541 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 23:34:47.805889 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:34:47.806162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:34:47.807544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:34:47.808993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 23:34:47.810513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 23:34:47.812225 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 23:34:47.824580 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 23:34:47.827222 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 23:34:47.829239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 23:34:47.830384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 23:34:47.830423 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:34:47.832298 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 23:34:47.844832 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 23:34:47.846865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 23:34:47.848039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:34:47.850232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 23:34:47.852186 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 23:34:47.853404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:34:47.854162 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 23:34:47.855662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:34:47.856559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:34:47.858600 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 23:34:47.861319 systemd-journald[1124]: Time spent on flushing to /var/log/journal/3db7fd1d318642be805482929e5fc07f is 17.688ms for 874 entries. May 15 23:34:47.861319 systemd-journald[1124]: System Journal (/var/log/journal/3db7fd1d318642be805482929e5fc07f) is 8M, max 195.6M, 187.6M free. 
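The modprobe@.service instances finishing above load configfs, dm_mod, drm, efi_pstore, fuse and loop. A small sketch that checks which of those names show up in /proc/modules afterwards (drivers built into the kernel will not appear there even though the corresponding unit succeeded):

# Sketch: cross-check the modules requested via modprobe@.service against /proc/modules.
wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}
for name in sorted(wanted):
    state = "loaded" if name in loaded else "not listed (possibly built-in)"
    print(f"{name}: {state}")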
May 15 23:34:47.897772 systemd-journald[1124]: Received client request to flush runtime journal. May 15 23:34:47.897823 kernel: loop0: detected capacity change from 0 to 211168 May 15 23:34:47.863336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:34:47.866173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:34:47.867599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 23:34:47.868957 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 23:34:47.870412 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 23:34:47.876449 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 23:34:47.895941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:34:47.904129 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 23:34:47.906237 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 23:34:47.904612 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. May 15 23:34:47.904622 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. May 15 23:34:47.906754 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 23:34:47.909719 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 23:34:47.909790 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 23:34:47.912300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 23:34:47.915166 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:34:47.925695 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 23:34:47.934671 kernel: loop1: detected capacity change from 0 to 126448 May 15 23:34:47.938758 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 23:34:47.951696 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 23:34:47.954565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:34:47.971111 kernel: loop2: detected capacity change from 0 to 103832 May 15 23:34:47.977492 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 15 23:34:47.977537 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 15 23:34:47.982333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:34:48.008391 kernel: loop3: detected capacity change from 0 to 211168 May 15 23:34:48.014192 kernel: loop4: detected capacity change from 0 to 126448 May 15 23:34:48.019307 kernel: loop5: detected capacity change from 0 to 103832 May 15 23:34:48.023342 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 23:34:48.023707 (sd-merge)[1189]: Merged extensions into '/usr'. May 15 23:34:48.027175 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... May 15 23:34:48.027191 systemd[1]: Reloading... May 15 23:34:48.081782 zram_generator::config[1214]: No configuration found. 
May 15 23:34:48.141996 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 23:34:48.174508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:34:48.224073 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 23:34:48.224258 systemd[1]: Reloading finished in 196 ms. May 15 23:34:48.242109 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 23:34:48.243479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 23:34:48.259312 systemd[1]: Starting ensure-sysext.service... May 15 23:34:48.260984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:34:48.272678 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... May 15 23:34:48.272692 systemd[1]: Reloading... May 15 23:34:48.280252 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 23:34:48.280454 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 23:34:48.281127 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 23:34:48.281330 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 15 23:34:48.281377 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. May 15 23:34:48.291700 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:34:48.291716 systemd-tmpfiles[1252]: Skipping /boot May 15 23:34:48.300347 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:34:48.300362 systemd-tmpfiles[1252]: Skipping /boot May 15 23:34:48.322116 zram_generator::config[1284]: No configuration found. May 15 23:34:48.406543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:34:48.456541 systemd[1]: Reloading finished in 183 ms. May 15 23:34:48.469749 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 23:34:48.486422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:34:48.494085 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:34:48.496376 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 23:34:48.505037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 23:34:48.512436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:34:48.514921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:34:48.519811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 23:34:48.528946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:34:48.531506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
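systemd-sysext has merged the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, using the /etc/extensions links that Ignition wrote earlier. A sketch that lists those links and asks systemd-sysext for its current view (only the paths logged above are assumed):

# Sketch: show the extension image links and the merged sysext state.
import os
import subprocess
from pathlib import Path

for link in sorted(Path("/etc/extensions").glob("*.raw")):
    # These are symlinks into /opt/extensions, written during the Ignition files stage.
    print(f"{link} -> {os.readlink(link)}")

# `systemd-sysext status` reports which extension images are currently merged into /usr.
print(subprocess.run(["systemd-sysext", "status"],
                     capture_output=True, text=True).stdout)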
May 15 23:34:48.534385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:34:48.537346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:34:48.539048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:34:48.539186 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:34:48.540751 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 23:34:48.544125 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 23:34:48.545968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:34:48.546158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:34:48.547645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:34:48.547792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:34:48.549629 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:34:48.549776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:34:48.557590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:34:48.558962 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:34:48.562229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:34:48.564374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:34:48.565548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:34:48.565727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:34:48.566022 systemd-udevd[1329]: Using default interface naming scheme 'v255'. May 15 23:34:48.568653 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 23:34:48.572616 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 23:34:48.586025 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 23:34:48.587614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:34:48.589126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:34:48.591229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:34:48.591380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:34:48.600318 augenrules[1360]: No rules May 15 23:34:48.604498 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 23:34:48.606148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:34:48.609980 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:34:48.610196 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:34:48.614044 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 15 23:34:48.614217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:34:48.616328 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 23:34:48.626653 systemd[1]: Finished ensure-sysext.service. May 15 23:34:48.632770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:34:48.634256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:34:48.636450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:34:48.636494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:34:48.639299 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:34:48.643238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:34:48.643310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:34:48.646010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 23:34:48.647170 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:34:48.647638 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 23:34:48.653624 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:34:48.653814 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:34:48.696977 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369) May 15 23:34:48.703742 systemd-resolved[1322]: Positive Trust Anchors: May 15 23:34:48.703762 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:34:48.703794 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:34:48.712982 systemd-resolved[1322]: Defaulting to hostname 'linux'. May 15 23:34:48.714662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:34:48.716013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:34:48.743928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:34:48.745578 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 23:34:48.746911 systemd[1]: Reached target time-set.target - System Time Set. 
May 15 23:34:48.749681 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 23:34:48.756957 systemd-networkd[1396]: lo: Link UP May 15 23:34:48.756964 systemd-networkd[1396]: lo: Gained carrier May 15 23:34:48.758341 systemd-networkd[1396]: Enumeration completed May 15 23:34:48.762377 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:34:48.762573 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:34:48.762582 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:34:48.763632 systemd[1]: Reached target network.target - Network. May 15 23:34:48.765725 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 23:34:48.766252 systemd-networkd[1396]: eth0: Link UP May 15 23:34:48.766260 systemd-networkd[1396]: eth0: Gained carrier May 15 23:34:48.766275 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:34:48.768236 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 23:34:48.781158 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:34:48.781856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 23:34:48.783175 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. May 15 23:34:49.260406 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 23:34:49.260455 systemd-timesyncd[1398]: Initial clock synchronization to Thu 2025-05-15 23:34:49.260310 UTC. May 15 23:34:49.260700 systemd-resolved[1322]: Clock change detected. Flushing caches. May 15 23:34:49.270235 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 23:34:49.275865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:34:49.292755 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 23:34:49.295653 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 23:34:49.329293 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:34:49.342583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:34:49.368080 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 23:34:49.369590 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:34:49.370738 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:34:49.371905 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 23:34:49.373158 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 23:34:49.374677 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 23:34:49.375788 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
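systemd-networkd has brought eth0 up with a DHCPv4 lease (10.0.0.93/16 via gateway 10.0.0.1) and systemd-timesyncd has synchronized against 10.0.0.1:123. A sketch for checking the same state from a shell session later on, assuming the interface is still named eth0:

# Sketch: query the link and time-sync state reported in the log above.
import subprocess

for cmd in (["networkctl", "status", "eth0", "--no-pager"],
            ["timedatectl", "show-timesync"]):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(result.stdout or result.stderr)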
May 15 23:34:49.377019 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 23:34:49.378227 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 23:34:49.378263 systemd[1]: Reached target paths.target - Path Units. May 15 23:34:49.379161 systemd[1]: Reached target timers.target - Timer Units. May 15 23:34:49.381124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 23:34:49.383668 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 23:34:49.386790 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 23:34:49.388213 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 23:34:49.389514 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 23:34:49.394406 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 23:34:49.395844 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 23:34:49.398147 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 23:34:49.399803 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 23:34:49.400981 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:34:49.401912 systemd[1]: Reached target basic.target - Basic System. May 15 23:34:49.402876 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 23:34:49.402908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 23:34:49.403738 systemd[1]: Starting containerd.service - containerd container runtime... May 15 23:34:49.406566 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:34:49.405621 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 23:34:49.409632 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 23:34:49.411440 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 23:34:49.412428 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 23:34:49.413738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 23:34:49.415601 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 23:34:49.418022 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 23:34:49.422690 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 23:34:49.425763 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 23:34:49.427805 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 23:34:49.428197 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 23:34:49.430889 jq[1429]: false May 15 23:34:49.431473 systemd[1]: Starting update-engine.service - Update Engine... 
May 15 23:34:49.434734 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:34:49.438187 extend-filesystems[1430]: Found loop3 May 15 23:34:49.452740 extend-filesystems[1430]: Found loop4 May 15 23:34:49.452740 extend-filesystems[1430]: Found loop5 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda May 15 23:34:49.452740 extend-filesystems[1430]: Found vda1 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda2 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda3 May 15 23:34:49.452740 extend-filesystems[1430]: Found usr May 15 23:34:49.452740 extend-filesystems[1430]: Found vda4 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda6 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda7 May 15 23:34:49.452740 extend-filesystems[1430]: Found vda9 May 15 23:34:49.452740 extend-filesystems[1430]: Checking size of /dev/vda9 May 15 23:34:49.490377 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 23:34:49.490402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1368) May 15 23:34:49.441191 dbus-daemon[1428]: [system] SELinux support is enabled May 15 23:34:49.443129 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:34:49.498881 extend-filesystems[1430]: Resized partition /dev/vda9 May 15 23:34:49.449998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:34:49.513767 update_engine[1438]: I20250515 23:34:49.479300 1438 main.cc:92] Flatcar Update Engine starting May 15 23:34:49.513767 update_engine[1438]: I20250515 23:34:49.484335 1438 update_check_scheduler.cc:74] Next update check in 3m54s May 15 23:34:49.514062 extend-filesystems[1452]: resize2fs 1.47.2 (1-Jan-2025) May 15 23:34:49.454443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:34:49.519833 jq[1441]: true May 15 23:34:49.454648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:34:49.454916 systemd[1]: motdgen.service: Deactivated successfully. May 15 23:34:49.455065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 23:34:49.520224 tar[1453]: linux-arm64/LICENSE May 15 23:34:49.520224 tar[1453]: linux-arm64/helm May 15 23:34:49.468316 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 23:34:49.522472 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 23:34:49.548742 jq[1454]: true May 15 23:34:49.468500 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:34:49.549086 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 23:34:49.549086 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 23:34:49.549086 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 23:34:49.493384 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:34:49.557575 extend-filesystems[1430]: Resized filesystem in /dev/vda9 May 15 23:34:49.511325 systemd[1]: Started update-engine.service - Update Engine. 
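extend-filesystems has grown /dev/vda9 online from 553472 to 1864699 blocks at the 4k block size reported by resize2fs. In plain numbers, taken straight from the log:

# Sketch: convert the resize2fs block counts above into sizes.
BLOCK_SIZE = 4096  # "1864699 (4k) blocks" in the log
old_blocks, new_blocks = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # ~7.11 GiB
print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")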
May 15 23:34:49.558572 bash[1480]: Updated "/home/core/.ssh/authorized_keys" May 15 23:34:49.516906 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:34:49.516929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:34:49.518513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 23:34:49.518541 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:34:49.523641 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 23:34:49.543241 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:34:49.543435 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:34:49.559304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:34:49.568099 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 23:34:49.570302 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (Power Button) May 15 23:34:49.570712 systemd-logind[1436]: New seat seat0. May 15 23:34:49.572278 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:34:49.627718 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:34:49.722559 containerd[1455]: time="2025-05-15T23:34:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 23:34:49.725424 containerd[1455]: time="2025-05-15T23:34:49.725387268Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 15 23:34:49.734622 containerd[1455]: time="2025-05-15T23:34:49.734586508Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.44µs" May 15 23:34:49.734622 containerd[1455]: time="2025-05-15T23:34:49.734618788Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 23:34:49.734695 containerd[1455]: time="2025-05-15T23:34:49.734637468Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 23:34:49.734781 containerd[1455]: time="2025-05-15T23:34:49.734764268Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 23:34:49.734808 containerd[1455]: time="2025-05-15T23:34:49.734790708Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 23:34:49.734827 containerd[1455]: time="2025-05-15T23:34:49.734814668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:34:49.734887 containerd[1455]: time="2025-05-15T23:34:49.734861108Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:34:49.734887 containerd[1455]: time="2025-05-15T23:34:49.734875908Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:34:49.735164 containerd[1455]: time="2025-05-15T23:34:49.735125068Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:34:49.735164 containerd[1455]: time="2025-05-15T23:34:49.735146268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:34:49.735164 containerd[1455]: time="2025-05-15T23:34:49.735157108Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:34:49.735164 containerd[1455]: time="2025-05-15T23:34:49.735165468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 23:34:49.735250 containerd[1455]: time="2025-05-15T23:34:49.735229108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 23:34:49.735457 containerd[1455]: time="2025-05-15T23:34:49.735436708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:34:49.735485 containerd[1455]: time="2025-05-15T23:34:49.735473868Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:34:49.735514 containerd[1455]: time="2025-05-15T23:34:49.735484228Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 23:34:49.736454 containerd[1455]: time="2025-05-15T23:34:49.736427228Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 23:34:49.736721 containerd[1455]: time="2025-05-15T23:34:49.736702668Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 23:34:49.736804 containerd[1455]: time="2025-05-15T23:34:49.736784188Z" level=info msg="metadata content store policy set" policy=shared May 15 23:34:49.739379 containerd[1455]: time="2025-05-15T23:34:49.739350668Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 23:34:49.739432 containerd[1455]: time="2025-05-15T23:34:49.739394348Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 23:34:49.739432 containerd[1455]: time="2025-05-15T23:34:49.739408948Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 23:34:49.739432 containerd[1455]: time="2025-05-15T23:34:49.739422588Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739435108Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739449468Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739460628Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739472268Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739482268Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739492388Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739509148Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 23:34:49.739625 containerd[1455]: time="2025-05-15T23:34:49.739522468Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739635428Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739657708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739669748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739679788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739689908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739699508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739709828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739719308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739730068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739740868Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 23:34:49.739769 containerd[1455]: time="2025-05-15T23:34:49.739750388Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 23:34:49.740060 containerd[1455]: time="2025-05-15T23:34:49.739998588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 23:34:49.740060 containerd[1455]: time="2025-05-15T23:34:49.740022828Z" level=info msg="Start snapshots syncer" May 15 23:34:49.740060 containerd[1455]: time="2025-05-15T23:34:49.740049388Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 23:34:49.740326 containerd[1455]: time="2025-05-15T23:34:49.740257508Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 23:34:49.740326 containerd[1455]: time="2025-05-15T23:34:49.740302628Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 23:34:49.740627 containerd[1455]: time="2025-05-15T23:34:49.740369948Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 23:34:49.740627 containerd[1455]: time="2025-05-15T23:34:49.740463588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 23:34:49.740627 containerd[1455]: time="2025-05-15T23:34:49.740486908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 23:34:49.740627 containerd[1455]: time="2025-05-15T23:34:49.740497908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 23:34:49.740627 containerd[1455]: time="2025-05-15T23:34:49.740520628Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.740918428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.740947868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.740969228Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741020508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 23:34:49.742054 containerd[1455]: 
time="2025-05-15T23:34:49.741041948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741057668Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741103068Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741120388Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741130348Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741143748Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741155428Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741299228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741321908Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 23:34:49.742054 containerd[1455]: time="2025-05-15T23:34:49.741398668Z" level=info msg="runtime interface created" May 15 23:34:49.742348 containerd[1455]: time="2025-05-15T23:34:49.741408108Z" level=info msg="created NRI interface" May 15 23:34:49.742348 containerd[1455]: time="2025-05-15T23:34:49.741417948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 23:34:49.742348 containerd[1455]: time="2025-05-15T23:34:49.741434028Z" level=info msg="Connect containerd service" May 15 23:34:49.742348 containerd[1455]: time="2025-05-15T23:34:49.741471508Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:34:49.742348 containerd[1455]: time="2025-05-15T23:34:49.742176268Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:34:49.846901 containerd[1455]: time="2025-05-15T23:34:49.846796228Z" level=info msg="Start subscribing containerd event" May 15 23:34:49.847056 containerd[1455]: time="2025-05-15T23:34:49.847027028Z" level=info msg="Start recovering state" May 15 23:34:49.847123 containerd[1455]: time="2025-05-15T23:34:49.847091148Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:34:49.847150 containerd[1455]: time="2025-05-15T23:34:49.847144588Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 23:34:49.847371 containerd[1455]: time="2025-05-15T23:34:49.847352428Z" level=info msg="Start event monitor" May 15 23:34:49.847447 containerd[1455]: time="2025-05-15T23:34:49.847432948Z" level=info msg="Start cni network conf syncer for default" May 15 23:34:49.847607 containerd[1455]: time="2025-05-15T23:34:49.847589588Z" level=info msg="Start streaming server" May 15 23:34:49.847762 containerd[1455]: time="2025-05-15T23:34:49.847747108Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 23:34:49.848021 containerd[1455]: time="2025-05-15T23:34:49.848003468Z" level=info msg="runtime interface starting up..." May 15 23:34:49.848080 containerd[1455]: time="2025-05-15T23:34:49.848068068Z" level=info msg="starting plugins..." May 15 23:34:49.848146 containerd[1455]: time="2025-05-15T23:34:49.848133108Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 23:34:49.848494 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:34:49.850221 containerd[1455]: time="2025-05-15T23:34:49.850195588Z" level=info msg="containerd successfully booted in 0.128033s" May 15 23:34:49.904709 tar[1453]: linux-arm64/README.md May 15 23:34:49.926575 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:34:50.336915 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:34:50.355261 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:34:50.360249 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:34:50.381269 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:34:50.381468 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 23:34:50.384224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:34:50.406118 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:34:50.408955 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:34:50.411069 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 23:34:50.412357 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:34:50.436655 systemd-networkd[1396]: eth0: Gained IPv6LL May 15 23:34:50.438806 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:34:50.440710 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:34:50.443456 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 23:34:50.447150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:34:50.456456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:34:50.470859 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 23:34:50.471055 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 23:34:50.473162 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 23:34:50.476572 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:34:51.001676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:34:51.003275 systemd[1]: Reached target multi-user.target - Multi-User System. 
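A note on the CNI error logged above at 23:34:49 ("no network config found in /etc/cni/net.d"): pod networking stays uninitialized until a conflist shows up in that directory, which a network add-on or installer normally provides later. The Python sketch below writes the kind of minimal bridge/portmap conflist the CRI plugin looks for; the file name, network name, and 10.88.0.0/16 subnet are illustrative assumptions, not values taken from this boot.

```python
import json
import pathlib

# Illustrative only: a bridge + portmap conflist of the sort containerd's CRI
# plugin scans /etc/cni/net.d for. Names and the 10.88.0.0/16 subnet are
# assumptions for the sketch, not values from this boot log. Needs root on a real host.
conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print(f"wrote {path}")
```

On a cluster like this one the network add-on owns that file, so the sketch is only meant to show what the failed check expects to find.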
May 15 23:34:51.006428 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:34:51.008618 systemd[1]: Startup finished in 607ms (kernel) + 5.442s (initrd) + 3.385s (userspace) = 9.435s. May 15 23:34:51.465001 kubelet[1555]: E0515 23:34:51.464875 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:34:51.467475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:34:51.467648 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:34:51.468068 systemd[1]: kubelet.service: Consumed 825ms CPU time, 259M memory peak. May 15 23:34:55.004377 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:34:55.005809 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:34778.service - OpenSSH per-connection server daemon (10.0.0.1:34778). May 15 23:34:55.079823 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 34778 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:55.081607 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:55.095043 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:34:55.095939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:34:55.100763 systemd-logind[1436]: New session 1 of user core. May 15 23:34:55.118707 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:34:55.121854 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 23:34:55.136562 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:34:55.138638 systemd-logind[1436]: New session c1 of user core. May 15 23:34:55.247560 systemd[1573]: Queued start job for default target default.target. May 15 23:34:55.260432 systemd[1573]: Created slice app.slice - User Application Slice. May 15 23:34:55.260460 systemd[1573]: Reached target paths.target - Paths. May 15 23:34:55.260509 systemd[1573]: Reached target timers.target - Timers. May 15 23:34:55.261706 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:34:55.270257 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:34:55.270319 systemd[1573]: Reached target sockets.target - Sockets. May 15 23:34:55.270355 systemd[1573]: Reached target basic.target - Basic System. May 15 23:34:55.270381 systemd[1573]: Reached target default.target - Main User Target. May 15 23:34:55.270405 systemd[1573]: Startup finished in 126ms. May 15 23:34:55.270598 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 23:34:55.271902 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:34:55.334006 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784). 
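The kubelet exit above at 23:34:51 (status=1) is expected on first boot: /var/lib/kubelet/config.yaml does not exist because kubeadm has not run yet, and the unit keeps restarting and failing the same way further down. For orientation, a minimal Python sketch of the KubeletConfiguration shape that file carries; on this node the real file would be generated by kubeadm, and the values below are assumptions apart from the systemd cgroup driver and static pod path, both of which the kubelet reports later in this log.

```python
import pathlib

# Sketch of the file whose absence makes kubelet exit with status 1 above.
# On a kubeadm-provisioned node it is generated, not written by hand; the
# field values here are illustrative assumptions.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches the systemd cgroup driver reported by the CRI runtime
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path} ({path.stat().st_size} bytes)")
```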
May 15 23:34:55.385147 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:55.386413 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:55.390714 systemd-logind[1436]: New session 2 of user core. May 15 23:34:55.406691 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:34:55.458572 sshd[1586]: Connection closed by 10.0.0.1 port 34784 May 15 23:34:55.460250 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 15 23:34:55.474507 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:34784.service: Deactivated successfully. May 15 23:34:55.475908 systemd[1]: session-2.scope: Deactivated successfully. May 15 23:34:55.477158 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. May 15 23:34:55.478720 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:34786.service - OpenSSH per-connection server daemon (10.0.0.1:34786). May 15 23:34:55.479622 systemd-logind[1436]: Removed session 2. May 15 23:34:55.529655 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 34786 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:55.530872 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:55.535161 systemd-logind[1436]: New session 3 of user core. May 15 23:34:55.551699 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 23:34:55.599992 sshd[1594]: Connection closed by 10.0.0.1 port 34786 May 15 23:34:55.600479 sshd-session[1591]: pam_unix(sshd:session): session closed for user core May 15 23:34:55.610658 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:34786.service: Deactivated successfully. May 15 23:34:55.613050 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:34:55.614090 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. May 15 23:34:55.616829 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:34798.service - OpenSSH per-connection server daemon (10.0.0.1:34798). May 15 23:34:55.617713 systemd-logind[1436]: Removed session 3. May 15 23:34:55.670007 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 34798 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:55.671275 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:55.675704 systemd-logind[1436]: New session 4 of user core. May 15 23:34:55.686697 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 23:34:55.739351 sshd[1602]: Connection closed by 10.0.0.1 port 34798 May 15 23:34:55.739736 sshd-session[1599]: pam_unix(sshd:session): session closed for user core May 15 23:34:55.752563 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:34798.service: Deactivated successfully. May 15 23:34:55.754843 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:34:55.756159 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. May 15 23:34:55.757851 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:34804.service - OpenSSH per-connection server daemon (10.0.0.1:34804). May 15 23:34:55.758631 systemd-logind[1436]: Removed session 4. 
May 15 23:34:55.811896 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 34804 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:55.812851 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:55.816908 systemd-logind[1436]: New session 5 of user core. May 15 23:34:55.828716 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:34:55.893395 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 23:34:55.893718 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:34:55.906478 sudo[1611]: pam_unix(sudo:session): session closed for user root May 15 23:34:55.909278 sshd[1610]: Connection closed by 10.0.0.1 port 34804 May 15 23:34:55.909609 sshd-session[1607]: pam_unix(sshd:session): session closed for user core May 15 23:34:55.924408 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:34804.service: Deactivated successfully. May 15 23:34:55.925905 systemd[1]: session-5.scope: Deactivated successfully. May 15 23:34:55.927470 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. May 15 23:34:55.931792 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:34814.service - OpenSSH per-connection server daemon (10.0.0.1:34814). May 15 23:34:55.932807 systemd-logind[1436]: Removed session 5. May 15 23:34:56.002100 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 34814 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:56.003330 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:56.007595 systemd-logind[1436]: New session 6 of user core. May 15 23:34:56.018675 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 23:34:56.069178 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 23:34:56.069475 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:34:56.075656 sudo[1621]: pam_unix(sudo:session): session closed for user root May 15 23:34:56.080115 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 23:34:56.080368 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:34:56.096124 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:34:56.127630 augenrules[1643]: No rules May 15 23:34:56.129001 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:34:56.129206 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:34:56.130754 sudo[1620]: pam_unix(sudo:session): session closed for user root May 15 23:34:56.132790 sshd[1619]: Connection closed by 10.0.0.1 port 34814 May 15 23:34:56.132712 sshd-session[1616]: pam_unix(sshd:session): session closed for user core May 15 23:34:56.146400 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:34814.service: Deactivated successfully. May 15 23:34:56.147857 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:34:56.151422 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. May 15 23:34:56.152756 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:34824.service - OpenSSH per-connection server daemon (10.0.0.1:34824). May 15 23:34:56.154342 systemd-logind[1436]: Removed session 6. 
May 15 23:34:56.199827 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 34824 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:34:56.201001 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:34:56.206255 systemd-logind[1436]: New session 7 of user core. May 15 23:34:56.215710 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:34:56.266114 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:34:56.266371 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:34:56.610114 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:34:56.628809 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:34:56.879898 dockerd[1676]: time="2025-05-15T23:34:56.879782828Z" level=info msg="Starting up" May 15 23:34:56.883201 dockerd[1676]: time="2025-05-15T23:34:56.883109028Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 23:34:56.977007 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3435343673-merged.mount: Deactivated successfully. May 15 23:34:56.994923 dockerd[1676]: time="2025-05-15T23:34:56.994716868Z" level=info msg="Loading containers: start." May 15 23:34:57.141562 kernel: Initializing XFRM netlink socket May 15 23:34:57.203521 systemd-networkd[1396]: docker0: Link UP May 15 23:34:57.260811 dockerd[1676]: time="2025-05-15T23:34:57.260707028Z" level=info msg="Loading containers: done." May 15 23:34:57.274759 dockerd[1676]: time="2025-05-15T23:34:57.274715228Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:34:57.274897 dockerd[1676]: time="2025-05-15T23:34:57.274800628Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 15 23:34:57.274989 dockerd[1676]: time="2025-05-15T23:34:57.274963268Z" level=info msg="Daemon has completed initialization" May 15 23:34:57.302375 dockerd[1676]: time="2025-05-15T23:34:57.302304628Z" level=info msg="API listen on /run/docker.sock" May 15 23:34:57.302499 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:34:57.792551 containerd[1455]: time="2025-05-15T23:34:57.792491468Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 15 23:34:58.427680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479692866.mount: Deactivated successfully. 
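The mount units named above, such as var-lib-docker-check\x2doverlayfs\x2dsupport3435343673-merged.mount and var-lib-containerd-tmpmounts-containerd\x2dmount3479692866.mount, are systemd-escaped filesystem paths. A small helper, sketched here rather than taken from systemd, applies the escaping rules in reverse to make them readable.

```python
import re

# Decode systemd mount-unit names back into paths: '/' was mapped to '-' and
# other special characters to \xNN, so undo the hyphens first, then the escapes.
# The sample names are copied from this log.
def unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")
    name = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

for unit in (
    r"var-lib-docker-check\x2doverlayfs\x2dsupport3435343673-merged.mount",
    r"var-lib-containerd-tmpmounts-containerd\x2dmount3479692866.mount",
):
    print(unit, "->", unit_to_path(unit))
```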
May 15 23:34:59.537489 containerd[1455]: time="2025-05-15T23:34:59.537438348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:34:59.538427 containerd[1455]: time="2025-05-15T23:34:59.537813668Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349352" May 15 23:34:59.539087 containerd[1455]: time="2025-05-15T23:34:59.538735748Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:34:59.541930 containerd[1455]: time="2025-05-15T23:34:59.541856188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:34:59.542742 containerd[1455]: time="2025-05-15T23:34:59.542687268Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 1.75015376s" May 15 23:34:59.542742 containerd[1455]: time="2025-05-15T23:34:59.542722948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 15 23:34:59.545806 containerd[1455]: time="2025-05-15T23:34:59.545723788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 15 23:35:00.865980 containerd[1455]: time="2025-05-15T23:35:00.865767708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:00.866830 containerd[1455]: time="2025-05-15T23:35:00.866575228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531737" May 15 23:35:00.867507 containerd[1455]: time="2025-05-15T23:35:00.867479748Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:00.870184 containerd[1455]: time="2025-05-15T23:35:00.870127548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:00.871263 containerd[1455]: time="2025-05-15T23:35:00.871219148Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.32545956s" May 15 23:35:00.871263 containerd[1455]: time="2025-05-15T23:35:00.871249388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 15 23:35:00.871953 
containerd[1455]: time="2025-05-15T23:35:00.871930148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 15 23:35:01.717990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 23:35:01.719904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:01.886018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:01.889782 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:35:01.929840 kubelet[1951]: E0515 23:35:01.929722 1951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:35:01.933019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:35:01.933155 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:35:01.933802 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108.6M memory peak. May 15 23:35:02.154916 containerd[1455]: time="2025-05-15T23:35:02.154801628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:02.155797 containerd[1455]: time="2025-05-15T23:35:02.155633108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293733" May 15 23:35:02.156987 containerd[1455]: time="2025-05-15T23:35:02.156944908Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:02.159300 containerd[1455]: time="2025-05-15T23:35:02.159268308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:02.161186 containerd[1455]: time="2025-05-15T23:35:02.161140868Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.28917932s" May 15 23:35:02.161186 containerd[1455]: time="2025-05-15T23:35:02.161178388Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 15 23:35:02.161609 containerd[1455]: time="2025-05-15T23:35:02.161587748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 15 23:35:03.235077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727531650.mount: Deactivated successfully. 
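The "Pulled image … in Ns" entries above report both an image size in bytes and a wall-clock duration, so a rough effective pull rate can be read straight off the log. The sketch below parses abbreviated, unescaped copies of those entries; the derived MB/s figure lumps registry transfer and local unpacking together, so treat it as an order-of-magnitude number.

```python
import re

# Log-analysis sketch: extract the reported size and wall time from containerd's
# "Pulled image" lines and derive an approximate rate. The samples are shortened,
# unescaped copies of entries from this boot log.
PULLED = re.compile(r'Pulled image "(?P<image>[^"]+)".*size "(?P<size>\d+)" in (?P<secs>[\d.]+)s')

samples = [
    'Pulled image "registry.k8s.io/kube-apiserver:v1.33.1" ... size "27346150" in 1.75015376s',
    'Pulled image "registry.k8s.io/kube-controller-manager:v1.33.1" ... size "25086427" in 1.32545956s',
    'Pulled image "registry.k8s.io/kube-scheduler:v1.33.1" ... size "19848441" in 1.28917932s',
]

for line in samples:
    m = PULLED.search(line)
    if m:
        mb = int(m["size"]) / 1e6
        secs = float(m["secs"])
        print(f'{m["image"]}: {mb:.1f} MB in {secs:.2f}s ~ {mb / secs:.1f} MB/s')
```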
May 15 23:35:03.463631 containerd[1455]: time="2025-05-15T23:35:03.463574468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:03.464036 containerd[1455]: time="2025-05-15T23:35:03.463947028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196006" May 15 23:35:03.464803 containerd[1455]: time="2025-05-15T23:35:03.464762148Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:03.466868 containerd[1455]: time="2025-05-15T23:35:03.466830948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:03.467576 containerd[1455]: time="2025-05-15T23:35:03.467542788Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.3059102s" May 15 23:35:03.467613 containerd[1455]: time="2025-05-15T23:35:03.467578108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 15 23:35:03.468168 containerd[1455]: time="2025-05-15T23:35:03.468148428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 15 23:35:04.096477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764396399.mount: Deactivated successfully. 
May 15 23:35:04.976100 containerd[1455]: time="2025-05-15T23:35:04.976050268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:04.976642 containerd[1455]: time="2025-05-15T23:35:04.976582508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" May 15 23:35:04.977398 containerd[1455]: time="2025-05-15T23:35:04.977343908Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:04.980557 containerd[1455]: time="2025-05-15T23:35:04.980479988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:04.981800 containerd[1455]: time="2025-05-15T23:35:04.981660628Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.51348352s" May 15 23:35:04.981800 containerd[1455]: time="2025-05-15T23:35:04.981699588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 15 23:35:04.982612 containerd[1455]: time="2025-05-15T23:35:04.982206268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:35:05.408932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381955859.mount: Deactivated successfully. 
May 15 23:35:05.414428 containerd[1455]: time="2025-05-15T23:35:05.414384028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:35:05.415181 containerd[1455]: time="2025-05-15T23:35:05.414981268Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 23:35:05.415979 containerd[1455]: time="2025-05-15T23:35:05.415941108Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:35:05.417916 containerd[1455]: time="2025-05-15T23:35:05.417874548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:35:05.418807 containerd[1455]: time="2025-05-15T23:35:05.418775068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 436.53716ms" May 15 23:35:05.418807 containerd[1455]: time="2025-05-15T23:35:05.418805108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 23:35:05.419490 containerd[1455]: time="2025-05-15T23:35:05.419339148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 15 23:35:07.647701 containerd[1455]: time="2025-05-15T23:35:07.647597108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:07.648981 containerd[1455]: time="2025-05-15T23:35:07.648707508Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230165" May 15 23:35:07.649856 containerd[1455]: time="2025-05-15T23:35:07.649792708Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:07.652680 containerd[1455]: time="2025-05-15T23:35:07.652643068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:07.654814 containerd[1455]: time="2025-05-15T23:35:07.654771868Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.23539488s" May 15 23:35:07.654814 containerd[1455]: time="2025-05-15T23:35:07.654812908Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 15 23:35:12.183723 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 2. May 15 23:35:12.185286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:12.306581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:12.310480 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:35:12.343611 kubelet[2067]: E0515 23:35:12.343558 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:35:12.346199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:35:12.346345 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:35:12.346766 systemd[1]: kubelet.service: Consumed 138ms CPU time, 108.6M memory peak. May 15 23:35:13.837264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:13.837504 systemd[1]: kubelet.service: Consumed 138ms CPU time, 108.6M memory peak. May 15 23:35:13.839652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:13.861076 systemd[1]: Reload requested from client PID 2082 ('systemctl') (unit session-7.scope)... May 15 23:35:13.861098 systemd[1]: Reloading... May 15 23:35:13.941582 zram_generator::config[2132]: No configuration found. May 15 23:35:14.188899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:35:14.262605 systemd[1]: Reloading finished in 401 ms. May 15 23:35:14.316556 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 23:35:14.316630 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 23:35:14.316892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:14.316943 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95M memory peak. May 15 23:35:14.319786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:14.432051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:14.436932 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:35:14.471741 kubelet[2172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:35:14.471741 kubelet[2172]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:35:14.471741 kubelet[2172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
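When the kubelet comes back above, it warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and should move into the config file, while --pod-infra-container-image is going away because, per the same warning, the sandbox image now comes from the CRI runtime. The fragment below sketches the config-file counterparts of the first two flags; the field names are from KubeletConfiguration v1beta1, the socket path matches the containerd endpoint earlier in this log, and the plugin directory matches the path the kubelet recreates just below. It is an illustration, not this node's actual configuration.

```python
import pathlib

# Sketch only: config-file counterparts of two deprecated flags warned about above.
# Whether these exact fields are what this node's provisioning intends is an assumption.
FRAGMENT = """\
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

config = pathlib.Path("/var/lib/kubelet/config.yaml")
if config.exists():
    print(f"{config} exists; fields to fold in:\n{FRAGMENT}")
else:
    print(f"{config} missing (as in the failures above); fragment for reference:\n{FRAGMENT}")
```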
May 15 23:35:14.471741 kubelet[2172]: I0515 23:35:14.471712 2172 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:35:16.123911 kubelet[2172]: I0515 23:35:16.123858 2172 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 15 23:35:16.123911 kubelet[2172]: I0515 23:35:16.123897 2172 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:35:16.124280 kubelet[2172]: I0515 23:35:16.124132 2172 server.go:956] "Client rotation is on, will bootstrap in background" May 15 23:35:16.180891 kubelet[2172]: I0515 23:35:16.180840 2172 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:35:16.187001 kubelet[2172]: E0515 23:35:16.186960 2172 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 15 23:35:16.195569 kubelet[2172]: I0515 23:35:16.195449 2172 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:35:16.198363 kubelet[2172]: I0515 23:35:16.198297 2172 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 23:35:16.199560 kubelet[2172]: I0515 23:35:16.199397 2172 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:35:16.199676 kubelet[2172]: I0515 23:35:16.199454 2172 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:35:16.199796 kubelet[2172]: I0515 23:35:16.199734 2172 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:35:16.199796 kubelet[2172]: I0515 23:35:16.199744 2172 
container_manager_linux.go:303] "Creating device plugin manager" May 15 23:35:16.199963 kubelet[2172]: I0515 23:35:16.199946 2172 state_mem.go:36] "Initialized new in-memory state store" May 15 23:35:16.204157 kubelet[2172]: I0515 23:35:16.204125 2172 kubelet.go:480] "Attempting to sync node with API server" May 15 23:35:16.204157 kubelet[2172]: I0515 23:35:16.204156 2172 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:35:16.204240 kubelet[2172]: I0515 23:35:16.204181 2172 kubelet.go:386] "Adding apiserver pod source" May 15 23:35:16.205223 kubelet[2172]: I0515 23:35:16.205197 2172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:35:16.206601 kubelet[2172]: I0515 23:35:16.206564 2172 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:35:16.207288 kubelet[2172]: I0515 23:35:16.207266 2172 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 15 23:35:16.207406 kubelet[2172]: W0515 23:35:16.207393 2172 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 23:35:16.207765 kubelet[2172]: E0515 23:35:16.207727 2172 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 15 23:35:16.207893 kubelet[2172]: E0515 23:35:16.207863 2172 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 15 23:35:16.213979 kubelet[2172]: I0515 23:35:16.213369 2172 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 23:35:16.213979 kubelet[2172]: I0515 23:35:16.213423 2172 server.go:1289] "Started kubelet" May 15 23:35:16.223583 kubelet[2172]: I0515 23:35:16.221960 2172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:35:16.223583 kubelet[2172]: I0515 23:35:16.223437 2172 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.224602 2172 server.go:317] "Adding debug handlers to kubelet server" May 15 23:35:16.228227 kubelet[2172]: E0515 23:35:16.224377 2172 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd77de86d170c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:35:16.213389068 +0000 UTC m=+1.772790081,LastTimestamp:2025-05-15 23:35:16.213389068 +0000 UTC m=+1.772790081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.226010 2172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.226314 2172 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.226513 2172 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.226994 2172 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.227116 2172 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 15 23:35:16.228227 kubelet[2172]: I0515 23:35:16.227167 2172 reconciler.go:26] "Reconciler: start to sync state" May 15 23:35:16.228227 kubelet[2172]: E0515 23:35:16.227600 2172 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 15 23:35:16.231916 kubelet[2172]: E0515 23:35:16.229613 2172 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:35:16.231916 kubelet[2172]: E0515 23:35:16.229680 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" May 15 23:35:16.231916 kubelet[2172]: E0515 23:35:16.230107 2172 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:35:16.231916 kubelet[2172]: I0515 23:35:16.231573 2172 factory.go:223] Registration of the systemd container factory successfully May 15 23:35:16.231916 kubelet[2172]: I0515 23:35:16.231684 2172 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:35:16.233069 kubelet[2172]: I0515 23:35:16.233044 2172 factory.go:223] Registration of the containerd container factory successfully May 15 23:35:16.244082 kubelet[2172]: I0515 23:35:16.244058 2172 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 23:35:16.244082 kubelet[2172]: I0515 23:35:16.244072 2172 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 23:35:16.244082 kubelet[2172]: I0515 23:35:16.244091 2172 state_mem.go:36] "Initialized new in-memory state store" May 15 23:35:16.314681 kubelet[2172]: I0515 23:35:16.314645 2172 policy_none.go:49] "None policy: Start" May 15 23:35:16.314681 kubelet[2172]: I0515 23:35:16.314677 2172 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 23:35:16.314681 kubelet[2172]: I0515 23:35:16.314690 2172 state_mem.go:35] "Initializing new in-memory state store" May 15 23:35:16.317460 kubelet[2172]: I0515 23:35:16.317275 2172 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 15 23:35:16.318597 kubelet[2172]: I0515 23:35:16.318564 2172 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 15 23:35:16.318597 kubelet[2172]: I0515 23:35:16.318591 2172 status_manager.go:230] "Starting to sync pod status with apiserver" May 15 23:35:16.318715 kubelet[2172]: I0515 23:35:16.318617 2172 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 23:35:16.318715 kubelet[2172]: I0515 23:35:16.318625 2172 kubelet.go:2436] "Starting kubelet main sync loop" May 15 23:35:16.318715 kubelet[2172]: E0515 23:35:16.318675 2172 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:35:16.319988 kubelet[2172]: E0515 23:35:16.319685 2172 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 15 23:35:16.324061 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 23:35:16.330030 kubelet[2172]: E0515 23:35:16.329992 2172 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:35:16.337661 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 23:35:16.344276 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 23:35:16.354688 kubelet[2172]: E0515 23:35:16.354446 2172 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 15 23:35:16.354793 kubelet[2172]: I0515 23:35:16.354693 2172 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:35:16.354793 kubelet[2172]: I0515 23:35:16.354716 2172 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:35:16.355678 kubelet[2172]: I0515 23:35:16.355338 2172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:35:16.356519 kubelet[2172]: E0515 23:35:16.356488 2172 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 23:35:16.356589 kubelet[2172]: E0515 23:35:16.356576 2172 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:35:16.427944 kubelet[2172]: I0515 23:35:16.427911 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:16.427944 kubelet[2172]: I0515 23:35:16.427948 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:16.428101 kubelet[2172]: I0515 23:35:16.427966 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:16.428101 kubelet[2172]: I0515 23:35:16.427986 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:16.428101 kubelet[2172]: I0515 23:35:16.428033 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 15 23:35:16.428101 kubelet[2172]: I0515 23:35:16.428058 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:16.428101 kubelet[2172]: I0515 23:35:16.428077 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:16.428216 kubelet[2172]: I0515 23:35:16.428092 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:16.428216 kubelet[2172]: I0515 23:35:16.428107 2172 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:16.430698 kubelet[2172]: E0515 23:35:16.430654 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" May 15 23:35:16.432369 systemd[1]: Created slice kubepods-burstable-pod46e01f3bc27bafc48103e460be5d7568.slice - libcontainer container kubepods-burstable-pod46e01f3bc27bafc48103e460be5d7568.slice. May 15 23:35:16.446358 kubelet[2172]: E0515 23:35:16.446328 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:16.450635 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. May 15 23:35:16.457098 kubelet[2172]: I0515 23:35:16.457043 2172 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:35:16.457690 kubelet[2172]: E0515 23:35:16.457661 2172 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" May 15 23:35:16.467338 kubelet[2172]: E0515 23:35:16.467266 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:16.468781 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. 
May 15 23:35:16.470513 kubelet[2172]: E0515 23:35:16.470490 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:16.659257 kubelet[2172]: I0515 23:35:16.659157 2172 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:35:16.659576 kubelet[2172]: E0515 23:35:16.659524 2172 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" May 15 23:35:16.747555 kubelet[2172]: E0515 23:35:16.747367 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.748115 containerd[1455]: time="2025-05-15T23:35:16.748059268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46e01f3bc27bafc48103e460be5d7568,Namespace:kube-system,Attempt:0,}" May 15 23:35:16.767745 containerd[1455]: time="2025-05-15T23:35:16.767694028Z" level=info msg="connecting to shim 1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad" address="unix:///run/containerd/s/da38c660f4cc8d268d46a40973cf76f6f7f4820977ed930a8e466ca1e3be4eab" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:16.767916 kubelet[2172]: E0515 23:35:16.767741 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.768319 containerd[1455]: time="2025-05-15T23:35:16.768277828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 15 23:35:16.771662 kubelet[2172]: E0515 23:35:16.771616 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.772235 containerd[1455]: time="2025-05-15T23:35:16.772088188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 15 23:35:16.791793 systemd[1]: Started cri-containerd-1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad.scope - libcontainer container 1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad. 
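The repeated dns.go:153 "Nameserver limits exceeded" errors above mean the node's resolv.conf lists more nameservers than the kubelet will pass through, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) end up in the applied nameserver line. A small sketch of that trimming follows; the resolv.conf contents and helper name are hypothetical, and only the three surviving servers are taken from the log.

MAX_NAMESERVERS = 3  # classic resolver limit; matches the three servers kept above

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameservers that survive the limit, in file order."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    # When more are configured, the kubelet warns and keeps only the first three.
    return servers[:MAX_NAMESERVERS]

example = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(example))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']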
May 15 23:35:16.804789 containerd[1455]: time="2025-05-15T23:35:16.803733148Z" level=info msg="connecting to shim 7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d" address="unix:///run/containerd/s/554a30acfb8eaf77cf63104df436215cbd432ec3872196f70a43820bb679bef9" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:16.805723 containerd[1455]: time="2025-05-15T23:35:16.805689308Z" level=info msg="connecting to shim 81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e" address="unix:///run/containerd/s/2a4fb7ec9cd5ece9f83043c7ef1b2d432aa656246126ea6d0399fd1583d8d717" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:16.831206 kubelet[2172]: E0515 23:35:16.831169 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" May 15 23:35:16.831729 systemd[1]: Started cri-containerd-7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d.scope - libcontainer container 7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d. May 15 23:35:16.835352 systemd[1]: Started cri-containerd-81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e.scope - libcontainer container 81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e. May 15 23:35:16.842683 containerd[1455]: time="2025-05-15T23:35:16.842636628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46e01f3bc27bafc48103e460be5d7568,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad\"" May 15 23:35:16.850225 kubelet[2172]: E0515 23:35:16.850153 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.855569 containerd[1455]: time="2025-05-15T23:35:16.855505748Z" level=info msg="CreateContainer within sandbox \"1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:35:16.866999 containerd[1455]: time="2025-05-15T23:35:16.866937388Z" level=info msg="Container 57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:16.876096 containerd[1455]: time="2025-05-15T23:35:16.875957348Z" level=info msg="CreateContainer within sandbox \"1c86292bfb3261273614e6761779fd0f251d43a5ad8407ed84c74979f7ce30ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce\"" May 15 23:35:16.878812 containerd[1455]: time="2025-05-15T23:35:16.878771348Z" level=info msg="StartContainer for \"57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce\"" May 15 23:35:16.880045 containerd[1455]: time="2025-05-15T23:35:16.879941228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e\"" May 15 23:35:16.880045 containerd[1455]: time="2025-05-15T23:35:16.879983068Z" level=info msg="connecting to shim 57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce" 
address="unix:///run/containerd/s/da38c660f4cc8d268d46a40973cf76f6f7f4820977ed930a8e466ca1e3be4eab" protocol=ttrpc version=3 May 15 23:35:16.880862 kubelet[2172]: E0515 23:35:16.880836 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.883416 containerd[1455]: time="2025-05-15T23:35:16.883320948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d\"" May 15 23:35:16.884165 kubelet[2172]: E0515 23:35:16.884143 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:16.884795 containerd[1455]: time="2025-05-15T23:35:16.884765028Z" level=info msg="CreateContainer within sandbox \"81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:35:16.888291 containerd[1455]: time="2025-05-15T23:35:16.888243508Z" level=info msg="CreateContainer within sandbox \"7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:35:16.893704 containerd[1455]: time="2025-05-15T23:35:16.893657468Z" level=info msg="Container 05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:16.898115 containerd[1455]: time="2025-05-15T23:35:16.898061788Z" level=info msg="Container 2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:16.898746 systemd[1]: Started cri-containerd-57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce.scope - libcontainer container 57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce. 
May 15 23:35:16.905182 containerd[1455]: time="2025-05-15T23:35:16.905129388Z" level=info msg="CreateContainer within sandbox \"81bd33e7facaf8ff6ec4b738cf93ef87523d6f517f7df2e3f16ce3d8a47b251e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab\"" May 15 23:35:16.905929 containerd[1455]: time="2025-05-15T23:35:16.905894428Z" level=info msg="StartContainer for \"05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab\"" May 15 23:35:16.907267 containerd[1455]: time="2025-05-15T23:35:16.907225668Z" level=info msg="connecting to shim 05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab" address="unix:///run/containerd/s/2a4fb7ec9cd5ece9f83043c7ef1b2d432aa656246126ea6d0399fd1583d8d717" protocol=ttrpc version=3 May 15 23:35:16.907475 containerd[1455]: time="2025-05-15T23:35:16.907231508Z" level=info msg="CreateContainer within sandbox \"7ccadbb33e478cf2036a8378f2a7435e5a6c7cfc152cc2ee9bde3b1d43b0574d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca\"" May 15 23:35:16.908874 containerd[1455]: time="2025-05-15T23:35:16.908269388Z" level=info msg="StartContainer for \"2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca\"" May 15 23:35:16.910391 containerd[1455]: time="2025-05-15T23:35:16.910213268Z" level=info msg="connecting to shim 2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca" address="unix:///run/containerd/s/554a30acfb8eaf77cf63104df436215cbd432ec3872196f70a43820bb679bef9" protocol=ttrpc version=3 May 15 23:35:16.931749 systemd[1]: Started cri-containerd-05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab.scope - libcontainer container 05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab. May 15 23:35:16.932849 systemd[1]: Started cri-containerd-2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca.scope - libcontainer container 2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca. 
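Each containerd "connecting to shim" entry above ties a sandbox or container ID to a ttrpc socket under /run/containerd/s/; the kube-apiserver container (57bad70d…) reuses the socket of its sandbox (1c86292b…), and the kube-scheduler and kube-controller-manager containers do the same with theirs. The sketch below extracts those ID/socket pairs from lines in exactly the format printed here; the regex and helper name are illustrative, and other containerd versions may format the msg field differently.

import re

# Field layout as it appears above:
#   msg="connecting to shim <id>" address="unix:///run/containerd/s/<hash>" ... protocol=ttrpc version=3
SHIM_RE = re.compile(r'msg="connecting to shim (?P<id>[0-9a-f]+)" address="(?P<addr>unix://[^"]+)"')

def shim_connections(log_text: str) -> dict[str, str]:
    """Map shim/container IDs to the unix socket each one is served on."""
    return {m.group("id"): m.group("addr") for m in SHIM_RE.finditer(log_text)}

sample = ('msg="connecting to shim 57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce" '
          'address="unix:///run/containerd/s/da38c660f4cc8d268d46a40973cf76f6f7f4820977ed930a8e466ca1e3be4eab" '
          'protocol=ttrpc version=3')
print(shim_connections(sample))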
May 15 23:35:16.943629 containerd[1455]: time="2025-05-15T23:35:16.943524388Z" level=info msg="StartContainer for \"57bad70d92529e0bdef151d584db8fdb2b8b63c1b780a3a7389acd89194effce\" returns successfully" May 15 23:35:16.990470 containerd[1455]: time="2025-05-15T23:35:16.990305468Z" level=info msg="StartContainer for \"2bb05ed55542c95dfe91a4667921857b8e8a80d6eda81c75b07ead34473672ca\" returns successfully" May 15 23:35:16.991503 containerd[1455]: time="2025-05-15T23:35:16.991466388Z" level=info msg="StartContainer for \"05823b1501ee7b84c748b96a719e15311574cac232c84d12971ab8c1430321ab\" returns successfully" May 15 23:35:17.061891 kubelet[2172]: I0515 23:35:17.061743 2172 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:35:17.062150 kubelet[2172]: E0515 23:35:17.062095 2172 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" May 15 23:35:17.330981 kubelet[2172]: E0515 23:35:17.330873 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:17.331274 kubelet[2172]: E0515 23:35:17.331023 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:17.334586 kubelet[2172]: E0515 23:35:17.334527 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:17.334747 kubelet[2172]: E0515 23:35:17.334726 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:17.345013 kubelet[2172]: E0515 23:35:17.344978 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:17.345161 kubelet[2172]: E0515 23:35:17.345141 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:17.864323 kubelet[2172]: I0515 23:35:17.864282 2172 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:35:18.339948 kubelet[2172]: E0515 23:35:18.339907 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:18.340282 kubelet[2172]: E0515 23:35:18.340054 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:18.343415 kubelet[2172]: E0515 23:35:18.343386 2172 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:35:18.343554 kubelet[2172]: E0515 23:35:18.343525 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:19.029669 kubelet[2172]: E0515 23:35:19.029628 2172 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:35:19.123561 kubelet[2172]: I0515 23:35:19.123508 2172 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:35:19.123561 kubelet[2172]: E0515 23:35:19.123566 2172 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:35:19.128602 kubelet[2172]: I0515 23:35:19.128572 2172 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:35:19.138118 kubelet[2172]: E0515 23:35:19.138077 2172 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 23:35:19.138118 kubelet[2172]: I0515 23:35:19.138111 2172 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:35:19.140678 kubelet[2172]: E0515 23:35:19.140634 2172 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 23:35:19.140678 kubelet[2172]: I0515 23:35:19.140665 2172 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:35:19.143574 kubelet[2172]: E0515 23:35:19.143525 2172 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:35:19.208875 kubelet[2172]: I0515 23:35:19.208609 2172 apiserver.go:52] "Watching apiserver" May 15 23:35:19.227629 kubelet[2172]: I0515 23:35:19.227593 2172 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:35:19.340187 kubelet[2172]: I0515 23:35:19.340085 2172 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:35:19.342751 kubelet[2172]: E0515 23:35:19.342711 2172 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:35:19.342910 kubelet[2172]: E0515 23:35:19.342887 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:20.885629 systemd[1]: Reload requested from client PID 2461 ('systemctl') (unit session-7.scope)... May 15 23:35:20.885646 systemd[1]: Reloading... May 15 23:35:20.959608 zram_generator::config[2505]: No configuration found. May 15 23:35:21.041127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:35:21.125215 systemd[1]: Reloading finished in 239 ms. May 15 23:35:21.151497 kubelet[2172]: I0515 23:35:21.151451 2172 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:35:21.151758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:21.177852 systemd[1]: kubelet.service: Deactivated successfully. 
May 15 23:35:21.179597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:21.179679 systemd[1]: kubelet.service: Consumed 2.233s CPU time, 130.8M memory peak. May 15 23:35:21.181637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:35:21.331051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:35:21.334981 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:35:21.373150 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:35:21.373150 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:35:21.373150 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:35:21.373150 kubelet[2547]: I0515 23:35:21.372998 2547 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:35:21.380167 kubelet[2547]: I0515 23:35:21.380120 2547 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 15 23:35:21.380298 kubelet[2547]: I0515 23:35:21.380154 2547 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:35:21.380558 kubelet[2547]: I0515 23:35:21.380496 2547 server.go:956] "Client rotation is on, will bootstrap in background" May 15 23:35:21.382184 kubelet[2547]: I0515 23:35:21.382160 2547 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 15 23:35:21.385308 kubelet[2547]: I0515 23:35:21.385263 2547 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:35:21.391817 kubelet[2547]: I0515 23:35:21.391778 2547 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:35:21.394461 kubelet[2547]: I0515 23:35:21.394424 2547 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:35:21.394773 kubelet[2547]: I0515 23:35:21.394735 2547 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:35:21.394918 kubelet[2547]: I0515 23:35:21.394764 2547 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:35:21.395017 kubelet[2547]: I0515 23:35:21.394928 2547 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:35:21.395017 kubelet[2547]: I0515 23:35:21.394937 2547 container_manager_linux.go:303] "Creating device plugin manager" May 15 23:35:21.395017 kubelet[2547]: I0515 23:35:21.394983 2547 state_mem.go:36] "Initialized new in-memory state store" May 15 23:35:21.395133 kubelet[2547]: I0515 23:35:21.395118 2547 kubelet.go:480] "Attempting to sync node with API server" May 15 23:35:21.395170 kubelet[2547]: I0515 23:35:21.395135 2547 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:35:21.395170 kubelet[2547]: I0515 23:35:21.395160 2547 kubelet.go:386] "Adding apiserver pod source" May 15 23:35:21.395170 kubelet[2547]: I0515 23:35:21.395169 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:35:21.399329 kubelet[2547]: I0515 23:35:21.398645 2547 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:35:21.399329 kubelet[2547]: I0515 23:35:21.399218 2547 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 15 23:35:21.408542 kubelet[2547]: I0515 23:35:21.407524 2547 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 23:35:21.410197 kubelet[2547]: I0515 23:35:21.410163 2547 server.go:1289] "Started kubelet" May 15 23:35:21.412940 kubelet[2547]: I0515 23:35:21.412104 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:35:21.413839 
kubelet[2547]: I0515 23:35:21.413805 2547 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:35:21.414920 kubelet[2547]: I0515 23:35:21.414886 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:35:21.415797 kubelet[2547]: I0515 23:35:21.415404 2547 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:35:21.416616 kubelet[2547]: E0515 23:35:21.416582 2547 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:35:21.417316 kubelet[2547]: I0515 23:35:21.417277 2547 server.go:317] "Adding debug handlers to kubelet server" May 15 23:35:21.418081 kubelet[2547]: I0515 23:35:21.418056 2547 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 23:35:21.418884 kubelet[2547]: I0515 23:35:21.418364 2547 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:35:21.419663 kubelet[2547]: I0515 23:35:21.419243 2547 factory.go:223] Registration of the systemd container factory successfully May 15 23:35:21.419663 kubelet[2547]: I0515 23:35:21.419441 2547 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:35:21.420056 kubelet[2547]: I0515 23:35:21.419942 2547 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 15 23:35:21.420214 kubelet[2547]: I0515 23:35:21.420189 2547 reconciler.go:26] "Reconciler: start to sync state" May 15 23:35:21.424096 kubelet[2547]: I0515 23:35:21.423929 2547 factory.go:223] Registration of the containerd container factory successfully May 15 23:35:21.431291 kubelet[2547]: I0515 23:35:21.431120 2547 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 15 23:35:21.432920 kubelet[2547]: I0515 23:35:21.432884 2547 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 15 23:35:21.432920 kubelet[2547]: I0515 23:35:21.432911 2547 status_manager.go:230] "Starting to sync pod status with apiserver" May 15 23:35:21.433047 kubelet[2547]: I0515 23:35:21.432931 2547 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 23:35:21.433047 kubelet[2547]: I0515 23:35:21.432938 2547 kubelet.go:2436] "Starting kubelet main sync loop" May 15 23:35:21.433047 kubelet[2547]: E0515 23:35:21.432981 2547 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:35:21.458926 kubelet[2547]: I0515 23:35:21.458897 2547 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 23:35:21.458926 kubelet[2547]: I0515 23:35:21.458919 2547 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 23:35:21.458926 kubelet[2547]: I0515 23:35:21.458939 2547 state_mem.go:36] "Initialized new in-memory state store" May 15 23:35:21.459094 kubelet[2547]: I0515 23:35:21.459068 2547 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 23:35:21.459094 kubelet[2547]: I0515 23:35:21.459078 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 23:35:21.459094 kubelet[2547]: I0515 23:35:21.459095 2547 policy_none.go:49] "None policy: Start" May 15 23:35:21.459168 kubelet[2547]: I0515 23:35:21.459103 2547 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 23:35:21.459168 kubelet[2547]: I0515 23:35:21.459112 2547 state_mem.go:35] "Initializing new in-memory state store" May 15 23:35:21.459209 kubelet[2547]: I0515 23:35:21.459191 2547 state_mem.go:75] "Updated machine memory state" May 15 23:35:21.462855 kubelet[2547]: E0515 23:35:21.462680 2547 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 15 23:35:21.462939 kubelet[2547]: I0515 23:35:21.462929 2547 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:35:21.462979 kubelet[2547]: I0515 23:35:21.462942 2547 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:35:21.463150 kubelet[2547]: I0515 23:35:21.463119 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:35:21.464653 kubelet[2547]: E0515 23:35:21.464347 2547 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 23:35:21.534439 kubelet[2547]: I0515 23:35:21.534386 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:35:21.534580 kubelet[2547]: I0515 23:35:21.534524 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.534580 kubelet[2547]: I0515 23:35:21.534571 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:35:21.568265 kubelet[2547]: I0515 23:35:21.568233 2547 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:35:21.575735 kubelet[2547]: I0515 23:35:21.575689 2547 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 15 23:35:21.575866 kubelet[2547]: I0515 23:35:21.575826 2547 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:35:21.622828 kubelet[2547]: I0515 23:35:21.622753 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:21.622828 kubelet[2547]: I0515 23:35:21.622797 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:21.622828 kubelet[2547]: I0515 23:35:21.622819 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.622828 kubelet[2547]: I0515 23:35:21.622835 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.623059 kubelet[2547]: I0515 23:35:21.622852 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.623059 kubelet[2547]: I0515 23:35:21.622866 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.623059 kubelet[2547]: I0515 23:35:21.622881 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:35:21.623059 kubelet[2547]: I0515 23:35:21.622895 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 15 23:35:21.623059 kubelet[2547]: I0515 23:35:21.622912 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46e01f3bc27bafc48103e460be5d7568-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46e01f3bc27bafc48103e460be5d7568\") " pod="kube-system/kube-apiserver-localhost" May 15 23:35:21.840349 kubelet[2547]: E0515 23:35:21.840225 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:21.841286 kubelet[2547]: E0515 23:35:21.841247 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:21.841461 kubelet[2547]: E0515 23:35:21.841442 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:21.889859 sudo[2586]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 23:35:21.890138 sudo[2586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 23:35:22.344096 sudo[2586]: pam_unix(sudo:session): session closed for user root May 15 23:35:22.396728 kubelet[2547]: I0515 23:35:22.396687 2547 apiserver.go:52] "Watching apiserver" May 15 23:35:22.421169 kubelet[2547]: I0515 23:35:22.421100 2547 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:35:22.443800 kubelet[2547]: E0515 23:35:22.443755 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:22.444356 kubelet[2547]: I0515 23:35:22.444326 2547 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:35:22.445870 kubelet[2547]: E0515 23:35:22.445702 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:22.450409 kubelet[2547]: E0515 23:35:22.450374 2547 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:35:22.451248 kubelet[2547]: E0515 23:35:22.450569 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:22.480070 kubelet[2547]: I0515 23:35:22.479699 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.479680093 podStartE2EDuration="1.479680093s" podCreationTimestamp="2025-05-15 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:22.470712707 +0000 UTC m=+1.132326400" watchObservedRunningTime="2025-05-15 23:35:22.479680093 +0000 UTC m=+1.141293746" May 15 23:35:22.487656 kubelet[2547]: I0515 23:35:22.487497 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4874767549999999 podStartE2EDuration="1.487476755s" podCreationTimestamp="2025-05-15 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:22.479895453 +0000 UTC m=+1.141509106" watchObservedRunningTime="2025-05-15 23:35:22.487476755 +0000 UTC m=+1.149090408" May 15 23:35:22.499453 kubelet[2547]: I0515 23:35:22.499342 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.499324829 podStartE2EDuration="1.499324829s" podCreationTimestamp="2025-05-15 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:22.487717516 +0000 UTC m=+1.149331169" watchObservedRunningTime="2025-05-15 23:35:22.499324829 +0000 UTC m=+1.160938562" May 15 23:35:23.444895 kubelet[2547]: E0515 23:35:23.444855 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:23.445590 kubelet[2547]: E0515 23:35:23.445496 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:24.438907 sudo[1655]: pam_unix(sudo:session): session closed for user root May 15 23:35:24.440227 sshd[1654]: Connection closed by 10.0.0.1 port 34824 May 15 23:35:24.440717 sshd-session[1651]: pam_unix(sshd:session): session closed for user core May 15 23:35:24.443522 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:34824.service: Deactivated successfully. May 15 23:35:24.447320 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:35:24.447648 kubelet[2547]: E0515 23:35:24.447604 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:24.447891 systemd[1]: session-7.scope: Consumed 8.932s CPU time, 258.1M memory peak. May 15 23:35:24.451152 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. May 15 23:35:24.453563 systemd-logind[1436]: Removed session 7. 
May 15 23:35:25.879738 kubelet[2547]: E0515 23:35:25.879705 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:25.883671 kubelet[2547]: I0515 23:35:25.883586 2547 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 23:35:25.884194 containerd[1455]: time="2025-05-15T23:35:25.883891593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 23:35:25.884658 kubelet[2547]: I0515 23:35:25.884133 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 23:35:26.539197 kubelet[2547]: E0515 23:35:26.539145 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.156155 systemd[1]: Created slice kubepods-besteffort-podbb87e707_3e15_496a_ab8b_5855c2b0d0fe.slice - libcontainer container kubepods-besteffort-podbb87e707_3e15_496a_ab8b_5855c2b0d0fe.slice. May 15 23:35:27.166718 kubelet[2547]: I0515 23:35:27.166667 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-config-path\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.166718 kubelet[2547]: I0515 23:35:27.166712 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-net\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166730 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsfq\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-kube-api-access-7dsfq\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166747 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knq8\" (UniqueName: \"kubernetes.io/projected/bb87e707-3e15-496a-ab8b-5855c2b0d0fe-kube-api-access-5knq8\") pod \"kube-proxy-dldr9\" (UID: \"bb87e707-3e15-496a-ab8b-5855c2b0d0fe\") " pod="kube-system/kube-proxy-dldr9" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166770 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-bpf-maps\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166784 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-cgroup\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166799 2547 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb87e707-3e15-496a-ab8b-5855c2b0d0fe-lib-modules\") pod \"kube-proxy-dldr9\" (UID: \"bb87e707-3e15-496a-ab8b-5855c2b0d0fe\") " pod="kube-system/kube-proxy-dldr9" May 15 23:35:27.167777 kubelet[2547]: I0515 23:35:27.166811 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cni-path\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166825 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-etc-cni-netd\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166846 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-xtables-lock\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166860 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-clustermesh-secrets\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166874 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-kernel\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166900 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb87e707-3e15-496a-ab8b-5855c2b0d0fe-xtables-lock\") pod \"kube-proxy-dldr9\" (UID: \"bb87e707-3e15-496a-ab8b-5855c2b0d0fe\") " pod="kube-system/kube-proxy-dldr9" May 15 23:35:27.167953 kubelet[2547]: I0515 23:35:27.166915 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hubble-tls\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.168081 kubelet[2547]: I0515 23:35:27.166931 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb87e707-3e15-496a-ab8b-5855c2b0d0fe-kube-proxy\") pod \"kube-proxy-dldr9\" (UID: \"bb87e707-3e15-496a-ab8b-5855c2b0d0fe\") " pod="kube-system/kube-proxy-dldr9" May 15 23:35:27.168081 kubelet[2547]: I0515 23:35:27.166947 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-run\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.168081 kubelet[2547]: I0515 23:35:27.166961 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hostproc\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.168081 kubelet[2547]: I0515 23:35:27.166976 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-lib-modules\") pod \"cilium-pwckf\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " pod="kube-system/cilium-pwckf" May 15 23:35:27.172013 systemd[1]: Created slice kubepods-burstable-pod2e75bc48_994f_4aba_8ec9_8c4fb25c7b06.slice - libcontainer container kubepods-burstable-pod2e75bc48_994f_4aba_8ec9_8c4fb25c7b06.slice. May 15 23:35:27.181827 systemd[1]: Created slice kubepods-besteffort-pod0f32535c_9242_4145_871e_d77c62b4288c.slice - libcontainer container kubepods-besteffort-pod0f32535c_9242_4145_871e_d77c62b4288c.slice. May 15 23:35:27.267944 kubelet[2547]: I0515 23:35:27.267894 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfnz\" (UniqueName: \"kubernetes.io/projected/0f32535c-9242-4145-871e-d77c62b4288c-kube-api-access-zgfnz\") pod \"cilium-operator-6c4d7847fc-nk62c\" (UID: \"0f32535c-9242-4145-871e-d77c62b4288c\") " pod="kube-system/cilium-operator-6c4d7847fc-nk62c" May 15 23:35:27.267944 kubelet[2547]: I0515 23:35:27.267936 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f32535c-9242-4145-871e-d77c62b4288c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nk62c\" (UID: \"0f32535c-9242-4145-871e-d77c62b4288c\") " pod="kube-system/cilium-operator-6c4d7847fc-nk62c" May 15 23:35:27.452615 kubelet[2547]: E0515 23:35:27.452404 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.472160 kubelet[2547]: E0515 23:35:27.472125 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.472877 containerd[1455]: time="2025-05-15T23:35:27.472814973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dldr9,Uid:bb87e707-3e15-496a-ab8b-5855c2b0d0fe,Namespace:kube-system,Attempt:0,}" May 15 23:35:27.479217 kubelet[2547]: E0515 23:35:27.479182 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.480274 containerd[1455]: time="2025-05-15T23:35:27.480124708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwckf,Uid:2e75bc48-994f-4aba-8ec9-8c4fb25c7b06,Namespace:kube-system,Attempt:0,}" May 15 23:35:27.487406 kubelet[2547]: E0515 23:35:27.487370 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.488243 containerd[1455]: time="2025-05-15T23:35:27.487851604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nk62c,Uid:0f32535c-9242-4145-871e-d77c62b4288c,Namespace:kube-system,Attempt:0,}" May 15 23:35:27.509767 containerd[1455]: time="2025-05-15T23:35:27.509724210Z" level=info msg="connecting to shim 043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48" address="unix:///run/containerd/s/be1e90e23340e1105d34f6a2b505ce5d5e1d687bc4d73d56b92b4dc3d3431878" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:27.530118 containerd[1455]: time="2025-05-15T23:35:27.530065173Z" level=info msg="connecting to shim 21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:27.532796 systemd[1]: Started cri-containerd-043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48.scope - libcontainer container 043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48. May 15 23:35:27.543432 containerd[1455]: time="2025-05-15T23:35:27.543130680Z" level=info msg="connecting to shim d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92" address="unix:///run/containerd/s/80b60ec38f7042701c5d66fec28e65b77d46006f7ca5afc01d3c2f77f6419e6e" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:27.552750 systemd[1]: Started cri-containerd-21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b.scope - libcontainer container 21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b. May 15 23:35:27.570807 systemd[1]: Started cri-containerd-d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92.scope - libcontainer container d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92. 
May 15 23:35:27.588282 containerd[1455]: time="2025-05-15T23:35:27.588237375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dldr9,Uid:bb87e707-3e15-496a-ab8b-5855c2b0d0fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48\"" May 15 23:35:27.589283 kubelet[2547]: E0515 23:35:27.589251 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.593584 containerd[1455]: time="2025-05-15T23:35:27.593544826Z" level=info msg="CreateContainer within sandbox \"043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:35:27.605338 containerd[1455]: time="2025-05-15T23:35:27.605291090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwckf,Uid:2e75bc48-994f-4aba-8ec9-8c4fb25c7b06,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\"" May 15 23:35:27.606029 kubelet[2547]: E0515 23:35:27.606004 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.607398 containerd[1455]: time="2025-05-15T23:35:27.607360575Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:35:27.620328 containerd[1455]: time="2025-05-15T23:35:27.620264762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nk62c,Uid:0f32535c-9242-4145-871e-d77c62b4288c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\"" May 15 23:35:27.621097 kubelet[2547]: E0515 23:35:27.621047 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:27.622616 containerd[1455]: time="2025-05-15T23:35:27.622298886Z" level=info msg="Container 250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:27.651118 containerd[1455]: time="2025-05-15T23:35:27.651033866Z" level=info msg="CreateContainer within sandbox \"043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b\"" May 15 23:35:27.651954 containerd[1455]: time="2025-05-15T23:35:27.651865348Z" level=info msg="StartContainer for \"250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b\"" May 15 23:35:27.653362 containerd[1455]: time="2025-05-15T23:35:27.653327391Z" level=info msg="connecting to shim 250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b" address="unix:///run/containerd/s/be1e90e23340e1105d34f6a2b505ce5d5e1d687bc4d73d56b92b4dc3d3431878" protocol=ttrpc version=3 May 15 23:35:27.677758 systemd[1]: Started cri-containerd-250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b.scope - libcontainer container 250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b. 
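The RunPodSandbox results above return one sandbox ID per pod (kube-proxy-dldr9, cilium-pwckf, cilium-operator-6c4d7847fc-nk62c), and the subsequent CreateContainer/StartContainer calls for kube-proxy run inside the kube-proxy sandbox over the same shim socket (be1e90e2…). The sketch below pulls the pod-name-to-sandbox-ID mapping out of result lines in the escaped form they take in this journal output; the regex and helper name are illustrative only.

import re

# Matches lines like the ones above, where the inner quotes appear escaped as \" in the journal:
#   msg="RunPodSandbox for &PodSandboxMetadata{Name:...,Uid:...,Namespace:...,Attempt:0,} returns sandbox id \"<id>\""
SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),'
    r'.*?returns sandbox id \\"(?P<sandbox>[0-9a-f]+)\\"'
)

def sandbox_ids(log_text: str) -> dict[str, str]:
    """Map pod names to the sandbox IDs containerd reported for them."""
    return {m.group("name"): m.group("sandbox") for m in SANDBOX_RE.finditer(log_text)}

sample = (r'msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dldr9,'
          r'Uid:bb87e707-3e15-496a-ab8b-5855c2b0d0fe,Namespace:kube-system,Attempt:0,}'
          r' returns sandbox id \"043f00d5f0ed26e24138744e9f764e3ee3b234131025bf238d8a037b2e047e48\""')
print(sandbox_ids(sample))   # {'kube-proxy-dldr9': '043f00d5f0ed...'}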
May 15 23:35:27.713228 containerd[1455]: time="2025-05-15T23:35:27.713047676Z" level=info msg="StartContainer for \"250f90ac67d3ddf64046d17611de3edab77bbb2a9e2c1c92e91da4937266013b\" returns successfully" May 15 23:35:28.458434 kubelet[2547]: E0515 23:35:28.458355 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:28.459248 kubelet[2547]: E0515 23:35:28.458682 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:32.574555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287542848.mount: Deactivated successfully. May 15 23:35:33.307614 kubelet[2547]: E0515 23:35:33.307038 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:33.328432 kubelet[2547]: I0515 23:35:33.328370 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dldr9" podStartSLOduration=6.328356765 podStartE2EDuration="6.328356765s" podCreationTimestamp="2025-05-15 23:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:28.494621288 +0000 UTC m=+7.156234941" watchObservedRunningTime="2025-05-15 23:35:33.328356765 +0000 UTC m=+11.989970418" May 15 23:35:34.580513 update_engine[1438]: I20250515 23:35:34.580443 1438 update_attempter.cc:509] Updating boot flags... May 15 23:35:34.603628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2963) May 15 23:35:35.889925 kubelet[2547]: E0515 23:35:35.889885 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:37.669618 containerd[1455]: time="2025-05-15T23:35:37.669178007Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:37.669618 containerd[1455]: time="2025-05-15T23:35:37.669546008Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 23:35:37.670598 containerd[1455]: time="2025-05-15T23:35:37.670523169Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:37.671901 containerd[1455]: time="2025-05-15T23:35:37.671774930Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.064369755s" May 15 23:35:37.671901 containerd[1455]: time="2025-05-15T23:35:37.671822410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image 
reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 23:35:37.675963 containerd[1455]: time="2025-05-15T23:35:37.675837695Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:35:37.679083 containerd[1455]: time="2025-05-15T23:35:37.678739138Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:35:37.687034 containerd[1455]: time="2025-05-15T23:35:37.686255386Z" level=info msg="Container 18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:37.692372 containerd[1455]: time="2025-05-15T23:35:37.692334753Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\"" May 15 23:35:37.692984 containerd[1455]: time="2025-05-15T23:35:37.692931753Z" level=info msg="StartContainer for \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\"" May 15 23:35:37.694391 containerd[1455]: time="2025-05-15T23:35:37.694336995Z" level=info msg="connecting to shim 18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" protocol=ttrpc version=3 May 15 23:35:37.735801 systemd[1]: Started cri-containerd-18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885.scope - libcontainer container 18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885. May 15 23:35:37.814793 systemd[1]: cri-containerd-18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885.scope: Deactivated successfully. May 15 23:35:37.822552 containerd[1455]: time="2025-05-15T23:35:37.822189415Z" level=info msg="StartContainer for \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" returns successfully" May 15 23:35:37.844022 containerd[1455]: time="2025-05-15T23:35:37.843966359Z" level=info msg="received exit event container_id:\"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" id:\"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" pid:2991 exited_at:{seconds:1747352137 nanos:832763187}" May 15 23:35:37.844180 containerd[1455]: time="2025-05-15T23:35:37.844135039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" id:\"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" pid:2991 exited_at:{seconds:1747352137 nanos:832763187}" May 15 23:35:37.873891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885-rootfs.mount: Deactivated successfully. 
May 15 23:35:38.481949 kubelet[2547]: E0515 23:35:38.481918 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:38.487363 containerd[1455]: time="2025-05-15T23:35:38.487305072Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:35:38.494026 containerd[1455]: time="2025-05-15T23:35:38.493981679Z" level=info msg="Container d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:38.499335 containerd[1455]: time="2025-05-15T23:35:38.499289965Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\"" May 15 23:35:38.500856 containerd[1455]: time="2025-05-15T23:35:38.500826686Z" level=info msg="StartContainer for \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\"" May 15 23:35:38.501973 containerd[1455]: time="2025-05-15T23:35:38.501944247Z" level=info msg="connecting to shim d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" protocol=ttrpc version=3 May 15 23:35:38.520707 systemd[1]: Started cri-containerd-d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0.scope - libcontainer container d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0. May 15 23:35:38.544380 containerd[1455]: time="2025-05-15T23:35:38.544343531Z" level=info msg="StartContainer for \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" returns successfully" May 15 23:35:38.562528 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:35:38.562760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:35:38.562938 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:35:38.564712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:35:38.566711 systemd[1]: cri-containerd-d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0.scope: Deactivated successfully. May 15 23:35:38.567286 containerd[1455]: time="2025-05-15T23:35:38.567256675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" id:\"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" pid:3037 exited_at:{seconds:1747352138 nanos:566944194}" May 15 23:35:38.567703 containerd[1455]: time="2025-05-15T23:35:38.567670155Z" level=info msg="received exit event container_id:\"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" id:\"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" pid:3037 exited_at:{seconds:1747352138 nanos:566944194}" May 15 23:35:38.594485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
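Annotation: the exited_at fields in the containerd TaskExit events above are raw Unix timestamps (seconds plus nanoseconds). A quick sketch converting the values copied from the two exit events back to wall-clock UTC, which lines up with the surrounding journal timestamps:

```python
from datetime import datetime, timezone

# exited_at values copied from the TaskExit events above.
exit_events = {
    "mount-cgroup (18a9b69d...)":            (1747352137, 832763187),
    "apply-sysctl-overwrites (d63a624b...)": (1747352138, 566944194),
}

for name, (seconds, nanos) in exit_events.items():
    ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    # Prints 2025-05-15 23:35:37/38 UTC, matching the journal lines above.
    print(f"{name}: exited at {ts.isoformat()}")
```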
May 15 23:35:39.485838 kubelet[2547]: E0515 23:35:39.485804 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:39.490666 containerd[1455]: time="2025-05-15T23:35:39.490619313Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:35:39.501100 containerd[1455]: time="2025-05-15T23:35:39.500495083Z" level=info msg="Container 357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:39.512640 containerd[1455]: time="2025-05-15T23:35:39.512597695Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\"" May 15 23:35:39.513187 containerd[1455]: time="2025-05-15T23:35:39.513134135Z" level=info msg="StartContainer for \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\"" May 15 23:35:39.514884 containerd[1455]: time="2025-05-15T23:35:39.514854137Z" level=info msg="connecting to shim 357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" protocol=ttrpc version=3 May 15 23:35:39.540749 systemd[1]: Started cri-containerd-357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd.scope - libcontainer container 357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd. May 15 23:35:39.579771 containerd[1455]: time="2025-05-15T23:35:39.579182239Z" level=info msg="StartContainer for \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" returns successfully" May 15 23:35:39.587880 systemd[1]: cri-containerd-357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd.scope: Deactivated successfully. May 15 23:35:39.590015 containerd[1455]: time="2025-05-15T23:35:39.589887329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" id:\"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" pid:3084 exited_at:{seconds:1747352139 nanos:589004168}" May 15 23:35:39.590229 containerd[1455]: time="2025-05-15T23:35:39.589855489Z" level=info msg="received exit event container_id:\"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" id:\"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" pid:3084 exited_at:{seconds:1747352139 nanos:589004168}" May 15 23:35:39.609270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd-rootfs.mount: Deactivated successfully. May 15 23:35:39.842252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650644021.mount: Deactivated successfully. 
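Annotation: each Cilium init container in this sandbox (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) goes through the same containerd sequence: CreateContainer, connecting to shim, StartContainer returns, then a TaskExit once the short-lived step finishes. A rough Python sketch that pulls that sequence out of journal text shaped like the lines above, assuming one journal entry per line; the regexes are assumptions about the message layout, not a containerd API.

```python
import re

# Assumed message shapes, inferred from the containerd log lines in this journal.
CREATE = re.compile(r'returns container id \\?"([0-9a-f]{64})\\?"')
START  = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
EXIT   = re.compile(r'received exit event container_id:\\?"([0-9a-f]{64})\\?".*?'
                    r'exited_at:\{seconds:(\d+) nanos:(\d+)\}')

def lifecycle(journal_text: str) -> dict[str, list[str]]:
    """Map container id -> ordered list of observed lifecycle events."""
    events: dict[str, list[str]] = {}
    for line in journal_text.splitlines():
        for label, pattern in (("created", CREATE), ("started", START), ("exited", EXIT)):
            m = pattern.search(line)
            if m:
                events.setdefault(m.group(1), []).append(label)
    return events

# Usage: feed it the raw journal text; init containers such as mount-cgroup or
# mount-bpf-fs should each end up with ["created", "started", "exited"].
```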
May 15 23:35:40.491178 kubelet[2547]: E0515 23:35:40.491149 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:40.498596 containerd[1455]: time="2025-05-15T23:35:40.498090936Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:35:40.514774 containerd[1455]: time="2025-05-15T23:35:40.512997789Z" level=info msg="Container a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:40.522076 containerd[1455]: time="2025-05-15T23:35:40.521348557Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\"" May 15 23:35:40.523681 containerd[1455]: time="2025-05-15T23:35:40.523642399Z" level=info msg="StartContainer for \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\"" May 15 23:35:40.525314 containerd[1455]: time="2025-05-15T23:35:40.525156760Z" level=info msg="connecting to shim a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" protocol=ttrpc version=3 May 15 23:35:40.548699 systemd[1]: Started cri-containerd-a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5.scope - libcontainer container a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5. May 15 23:35:40.577433 systemd[1]: cri-containerd-a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5.scope: Deactivated successfully. 
May 15 23:35:40.580459 containerd[1455]: time="2025-05-15T23:35:40.580410490Z" level=info msg="received exit event container_id:\"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" id:\"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" pid:3137 exited_at:{seconds:1747352140 nanos:580017090}" May 15 23:35:40.580607 containerd[1455]: time="2025-05-15T23:35:40.580487290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" id:\"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" pid:3137 exited_at:{seconds:1747352140 nanos:580017090}" May 15 23:35:40.580925 containerd[1455]: time="2025-05-15T23:35:40.580902331Z" level=info msg="StartContainer for \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" returns successfully" May 15 23:35:40.690263 containerd[1455]: time="2025-05-15T23:35:40.689312669Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:40.690263 containerd[1455]: time="2025-05-15T23:35:40.690177429Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 23:35:40.690731 containerd[1455]: time="2025-05-15T23:35:40.690701230Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:35:40.692557 containerd[1455]: time="2025-05-15T23:35:40.692185351Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.016313096s" May 15 23:35:40.692557 containerd[1455]: time="2025-05-15T23:35:40.692224671Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 23:35:40.696385 containerd[1455]: time="2025-05-15T23:35:40.696348875Z" level=info msg="CreateContainer within sandbox \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:35:40.701792 containerd[1455]: time="2025-05-15T23:35:40.701752800Z" level=info msg="Container 479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:40.715719 containerd[1455]: time="2025-05-15T23:35:40.715670293Z" level=info msg="CreateContainer within sandbox \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\"" May 15 23:35:40.716154 containerd[1455]: time="2025-05-15T23:35:40.716131693Z" level=info msg="StartContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\"" May 15 23:35:40.717139 containerd[1455]: 
time="2025-05-15T23:35:40.716964294Z" level=info msg="connecting to shim 479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1" address="unix:///run/containerd/s/80b60ec38f7042701c5d66fec28e65b77d46006f7ca5afc01d3c2f77f6419e6e" protocol=ttrpc version=3 May 15 23:35:40.734728 systemd[1]: Started cri-containerd-479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1.scope - libcontainer container 479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1. May 15 23:35:40.777038 containerd[1455]: time="2025-05-15T23:35:40.776781348Z" level=info msg="StartContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" returns successfully" May 15 23:35:41.501361 kubelet[2547]: E0515 23:35:41.501326 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:41.511076 kubelet[2547]: E0515 23:35:41.511055 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:41.520852 containerd[1455]: time="2025-05-15T23:35:41.520792671Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:35:41.541596 kubelet[2547]: I0515 23:35:41.540575 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nk62c" podStartSLOduration=1.469525181 podStartE2EDuration="14.540522408s" podCreationTimestamp="2025-05-15 23:35:27 +0000 UTC" firstStartedPulling="2025-05-15 23:35:27.621955085 +0000 UTC m=+6.283568738" lastFinishedPulling="2025-05-15 23:35:40.692952312 +0000 UTC m=+19.354565965" observedRunningTime="2025-05-15 23:35:41.516676068 +0000 UTC m=+20.178289721" watchObservedRunningTime="2025-05-15 23:35:41.540522408 +0000 UTC m=+20.202136021" May 15 23:35:41.556618 containerd[1455]: time="2025-05-15T23:35:41.556512342Z" level=info msg="Container 66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:41.569632 containerd[1455]: time="2025-05-15T23:35:41.569589913Z" level=info msg="CreateContainer within sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\"" May 15 23:35:41.570489 containerd[1455]: time="2025-05-15T23:35:41.570457194Z" level=info msg="StartContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\"" May 15 23:35:41.571584 containerd[1455]: time="2025-05-15T23:35:41.571458874Z" level=info msg="connecting to shim 66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca" address="unix:///run/containerd/s/a52047d7a4def5c81ef69be9667e0c804e186fd12ce67f61a3b7263b1211c12d" protocol=ttrpc version=3 May 15 23:35:41.610731 systemd[1]: Started cri-containerd-66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca.scope - libcontainer container 66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca. 
May 15 23:35:41.644030 containerd[1455]: time="2025-05-15T23:35:41.643993816Z" level=info msg="StartContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" returns successfully" May 15 23:35:41.760801 containerd[1455]: time="2025-05-15T23:35:41.760462715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" id:\"35e614a02bd0b4a3660c84ac5a0fa300bdf5415b69f09947dcb93276b06a7c5d\" pid:3242 exited_at:{seconds:1747352141 nanos:760109194}" May 15 23:35:41.780060 kubelet[2547]: I0515 23:35:41.779683 2547 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 15 23:35:41.832613 systemd[1]: Created slice kubepods-burstable-pod887f18e5_00ad_4470_b794_5b65c35a6cb1.slice - libcontainer container kubepods-burstable-pod887f18e5_00ad_4470_b794_5b65c35a6cb1.slice. May 15 23:35:41.838783 systemd[1]: Created slice kubepods-burstable-podeaf805bc_e189_4cce_afc7_584833d915f8.slice - libcontainer container kubepods-burstable-podeaf805bc_e189_4cce_afc7_584833d915f8.slice. May 15 23:35:41.868404 kubelet[2547]: I0515 23:35:41.868300 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54rj2\" (UniqueName: \"kubernetes.io/projected/887f18e5-00ad-4470-b794-5b65c35a6cb1-kube-api-access-54rj2\") pod \"coredns-674b8bbfcf-r5b8t\" (UID: \"887f18e5-00ad-4470-b794-5b65c35a6cb1\") " pod="kube-system/coredns-674b8bbfcf-r5b8t" May 15 23:35:41.868404 kubelet[2547]: I0515 23:35:41.868344 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaf805bc-e189-4cce-afc7-584833d915f8-config-volume\") pod \"coredns-674b8bbfcf-q9fvs\" (UID: \"eaf805bc-e189-4cce-afc7-584833d915f8\") " pod="kube-system/coredns-674b8bbfcf-q9fvs" May 15 23:35:41.868404 kubelet[2547]: I0515 23:35:41.868368 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qptwt\" (UniqueName: \"kubernetes.io/projected/eaf805bc-e189-4cce-afc7-584833d915f8-kube-api-access-qptwt\") pod \"coredns-674b8bbfcf-q9fvs\" (UID: \"eaf805bc-e189-4cce-afc7-584833d915f8\") " pod="kube-system/coredns-674b8bbfcf-q9fvs" May 15 23:35:41.868629 kubelet[2547]: I0515 23:35:41.868427 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887f18e5-00ad-4470-b794-5b65c35a6cb1-config-volume\") pod \"coredns-674b8bbfcf-r5b8t\" (UID: \"887f18e5-00ad-4470-b794-5b65c35a6cb1\") " pod="kube-system/coredns-674b8bbfcf-r5b8t" May 15 23:35:42.136655 kubelet[2547]: E0515 23:35:42.135996 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:42.136976 containerd[1455]: time="2025-05-15T23:35:42.136774627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5b8t,Uid:887f18e5-00ad-4470-b794-5b65c35a6cb1,Namespace:kube-system,Attempt:0,}" May 15 23:35:42.144512 kubelet[2547]: E0515 23:35:42.144443 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:42.145137 containerd[1455]: time="2025-05-15T23:35:42.145090913Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-q9fvs,Uid:eaf805bc-e189-4cce-afc7-584833d915f8,Namespace:kube-system,Attempt:0,}" May 15 23:35:42.523519 kubelet[2547]: E0515 23:35:42.523483 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:42.525364 kubelet[2547]: E0515 23:35:42.523641 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:42.538395 kubelet[2547]: I0515 23:35:42.538332 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pwckf" podStartSLOduration=5.469851626 podStartE2EDuration="15.538317426s" podCreationTimestamp="2025-05-15 23:35:27 +0000 UTC" firstStartedPulling="2025-05-15 23:35:27.607083254 +0000 UTC m=+6.268696907" lastFinishedPulling="2025-05-15 23:35:37.675549094 +0000 UTC m=+16.337162707" observedRunningTime="2025-05-15 23:35:42.537746466 +0000 UTC m=+21.199360119" watchObservedRunningTime="2025-05-15 23:35:42.538317426 +0000 UTC m=+21.199931079" May 15 23:35:43.527618 kubelet[2547]: E0515 23:35:43.525702 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:44.527238 kubelet[2547]: E0515 23:35:44.527201 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:44.649433 systemd-networkd[1396]: cilium_host: Link UP May 15 23:35:44.649575 systemd-networkd[1396]: cilium_net: Link UP May 15 23:35:44.650259 systemd-networkd[1396]: cilium_net: Gained carrier May 15 23:35:44.650766 systemd-networkd[1396]: cilium_host: Gained carrier May 15 23:35:44.651125 systemd-networkd[1396]: cilium_net: Gained IPv6LL May 15 23:35:44.651278 systemd-networkd[1396]: cilium_host: Gained IPv6LL May 15 23:35:44.739727 systemd-networkd[1396]: cilium_vxlan: Link UP May 15 23:35:44.739734 systemd-networkd[1396]: cilium_vxlan: Gained carrier May 15 23:35:45.099593 kernel: NET: Registered PF_ALG protocol family May 15 23:35:45.668107 systemd-networkd[1396]: lxc_health: Link UP May 15 23:35:45.678476 systemd-networkd[1396]: lxc_health: Gained carrier May 15 23:35:46.225079 systemd-networkd[1396]: lxca49f496069e6: Link UP May 15 23:35:46.246573 kernel: eth0: renamed from tmp8104e May 15 23:35:46.262140 systemd-networkd[1396]: lxc540a3eb13767: Link UP May 15 23:35:46.263587 kernel: eth0: renamed from tmpf3665 May 15 23:35:46.276996 systemd-networkd[1396]: lxca49f496069e6: Gained carrier May 15 23:35:46.277883 systemd-networkd[1396]: lxc540a3eb13767: Gained carrier May 15 23:35:46.692725 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL May 15 23:35:47.484495 kubelet[2547]: E0515 23:35:47.484079 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:47.524653 systemd-networkd[1396]: lxc_health: Gained IPv6LL May 15 23:35:47.780779 systemd-networkd[1396]: lxca49f496069e6: Gained IPv6LL May 15 23:35:47.908658 systemd-networkd[1396]: lxc540a3eb13767: Gained IPv6LL May 15 23:35:49.108460 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:50516.service - OpenSSH per-connection server daemon 
(10.0.0.1:50516). May 15 23:35:49.163405 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 50516 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:35:49.165166 sshd-session[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:35:49.173339 systemd-logind[1436]: New session 8 of user core. May 15 23:35:49.183788 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 23:35:49.327566 sshd[3729]: Connection closed by 10.0.0.1 port 50516 May 15 23:35:49.326919 sshd-session[3727]: pam_unix(sshd:session): session closed for user core May 15 23:35:49.330323 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:50516.service: Deactivated successfully. May 15 23:35:49.332121 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:35:49.332949 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. May 15 23:35:49.333866 systemd-logind[1436]: Removed session 8. May 15 23:35:49.612705 kubelet[2547]: I0515 23:35:49.611940 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 23:35:49.622639 kubelet[2547]: E0515 23:35:49.622603 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:49.809268 containerd[1455]: time="2025-05-15T23:35:49.808721232Z" level=info msg="connecting to shim 8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524" address="unix:///run/containerd/s/471bad5369342823af48228e00b90aba57d3b66f2b4e9a8c101de71e8680f5e5" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:49.814064 containerd[1455]: time="2025-05-15T23:35:49.813968155Z" level=info msg="connecting to shim f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265" address="unix:///run/containerd/s/1639e6a7b1ca0ea73ddf2b96d627b8b50e0d94547f58fbe6fc25f6d3308406c6" namespace=k8s.io protocol=ttrpc version=3 May 15 23:35:49.847710 systemd[1]: Started cri-containerd-8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524.scope - libcontainer container 8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524. May 15 23:35:49.850976 systemd[1]: Started cri-containerd-f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265.scope - libcontainer container f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265. 
May 15 23:35:49.859666 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:35:49.866715 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:35:49.882007 containerd[1455]: time="2025-05-15T23:35:49.881967909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9fvs,Uid:eaf805bc-e189-4cce-afc7-584833d915f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524\"" May 15 23:35:49.882673 kubelet[2547]: E0515 23:35:49.882650 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:49.890130 containerd[1455]: time="2025-05-15T23:35:49.890079633Z" level=info msg="CreateContainer within sandbox \"8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:35:49.890839 containerd[1455]: time="2025-05-15T23:35:49.890805114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5b8t,Uid:887f18e5-00ad-4470-b794-5b65c35a6cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265\"" May 15 23:35:49.891695 kubelet[2547]: E0515 23:35:49.891569 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:49.896721 containerd[1455]: time="2025-05-15T23:35:49.896643956Z" level=info msg="CreateContainer within sandbox \"f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:35:49.902934 containerd[1455]: time="2025-05-15T23:35:49.902621000Z" level=info msg="Container 22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:49.905715 containerd[1455]: time="2025-05-15T23:35:49.905633521Z" level=info msg="Container 662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308: CDI devices from CRI Config.CDIDevices: []" May 15 23:35:49.909038 containerd[1455]: time="2025-05-15T23:35:49.909000963Z" level=info msg="CreateContainer within sandbox \"8104e980022510ac2038a356e4ad5af6d0be617ab9ebe14b6e560c51ea4a0524\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869\"" May 15 23:35:49.909567 containerd[1455]: time="2025-05-15T23:35:49.909413843Z" level=info msg="StartContainer for \"22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869\"" May 15 23:35:49.910413 containerd[1455]: time="2025-05-15T23:35:49.910352163Z" level=info msg="connecting to shim 22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869" address="unix:///run/containerd/s/471bad5369342823af48228e00b90aba57d3b66f2b4e9a8c101de71e8680f5e5" protocol=ttrpc version=3 May 15 23:35:49.911710 containerd[1455]: time="2025-05-15T23:35:49.911661204Z" level=info msg="CreateContainer within sandbox \"f3665c8bed635db6f679ae3ac598a9d3c84ee2f01959f6d36902316a85b19265\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308\"" May 15 23:35:49.912698 containerd[1455]: 
time="2025-05-15T23:35:49.912609325Z" level=info msg="StartContainer for \"662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308\"" May 15 23:35:49.915201 containerd[1455]: time="2025-05-15T23:35:49.914850286Z" level=info msg="connecting to shim 662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308" address="unix:///run/containerd/s/1639e6a7b1ca0ea73ddf2b96d627b8b50e0d94547f58fbe6fc25f6d3308406c6" protocol=ttrpc version=3 May 15 23:35:49.930870 systemd[1]: Started cri-containerd-22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869.scope - libcontainer container 22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869. May 15 23:35:49.933233 systemd[1]: Started cri-containerd-662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308.scope - libcontainer container 662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308. May 15 23:35:49.975104 containerd[1455]: time="2025-05-15T23:35:49.973910876Z" level=info msg="StartContainer for \"662d9b33bc5946375f51d1fc577af8f1dd1222991b4566fc4ccdc1fb5737c308\" returns successfully" May 15 23:35:49.980705 containerd[1455]: time="2025-05-15T23:35:49.979895639Z" level=info msg="StartContainer for \"22adffea62d89c8a36ba2a4f9bcd00956084ce2d7e25b312f8b91a0178905869\" returns successfully" May 15 23:35:50.541066 kubelet[2547]: E0515 23:35:50.540793 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:50.544929 kubelet[2547]: E0515 23:35:50.544890 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:50.545306 kubelet[2547]: E0515 23:35:50.545282 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:50.553152 kubelet[2547]: I0515 23:35:50.553098 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r5b8t" podStartSLOduration=23.553085471 podStartE2EDuration="23.553085471s" podCreationTimestamp="2025-05-15 23:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:50.552795391 +0000 UTC m=+29.214409004" watchObservedRunningTime="2025-05-15 23:35:50.553085471 +0000 UTC m=+29.214699124" May 15 23:35:50.588786 kubelet[2547]: I0515 23:35:50.588731 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q9fvs" podStartSLOduration=23.588713408 podStartE2EDuration="23.588713408s" podCreationTimestamp="2025-05-15 23:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:35:50.582054085 +0000 UTC m=+29.243667738" watchObservedRunningTime="2025-05-15 23:35:50.588713408 +0000 UTC m=+29.250327061" May 15 23:35:50.790292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512167803.mount: Deactivated successfully. 
May 15 23:35:51.546228 kubelet[2547]: E0515 23:35:51.545790 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:51.546228 kubelet[2547]: E0515 23:35:51.546167 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:52.547830 kubelet[2547]: E0515 23:35:52.547793 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:52.548185 kubelet[2547]: E0515 23:35:52.547929 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:35:54.341219 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:37388.service - OpenSSH per-connection server daemon (10.0.0.1:37388). May 15 23:35:54.403863 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 37388 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:35:54.405609 sshd-session[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:35:54.410514 systemd-logind[1436]: New session 9 of user core. May 15 23:35:54.420718 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 23:35:54.541559 sshd[3919]: Connection closed by 10.0.0.1 port 37388 May 15 23:35:54.542771 sshd-session[3917]: pam_unix(sshd:session): session closed for user core May 15 23:35:54.547399 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:37388.service: Deactivated successfully. May 15 23:35:54.550882 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:35:54.552753 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. May 15 23:35:54.554099 systemd-logind[1436]: Removed session 9. May 15 23:35:59.553322 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:37400.service - OpenSSH per-connection server daemon (10.0.0.1:37400). May 15 23:35:59.632078 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 37400 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:35:59.633200 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:35:59.637170 systemd-logind[1436]: New session 10 of user core. May 15 23:35:59.647702 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 23:35:59.756816 sshd[3938]: Connection closed by 10.0.0.1 port 37400 May 15 23:35:59.757180 sshd-session[3936]: pam_unix(sshd:session): session closed for user core May 15 23:35:59.770359 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:37400.service: Deactivated successfully. May 15 23:35:59.772257 systemd[1]: session-10.scope: Deactivated successfully. May 15 23:35:59.773875 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. May 15 23:35:59.775233 systemd-logind[1436]: Removed session 10. May 15 23:35:59.778034 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:37408.service - OpenSSH per-connection server daemon (10.0.0.1:37408). 
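Annotation: the per-connection SSH units that systemd keeps starting and stopping here encode both endpoints in the unit name; sshd@9-10.0.0.93:22-10.0.0.1:37400.service, for example, is connection counter 9 from peer 10.0.0.1:37400 to the local listener 10.0.0.93:22. A small parsing sketch for that naming scheme, with the pattern inferred from these log lines:

```python
import re

# Inferred from unit names such as "sshd@9-10.0.0.93:22-10.0.0.1:37400.service":
#   sshd@<counter>-<local addr>:<local port>-<peer addr>:<peer port>.service
UNIT = re.compile(
    r"sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def describe(unit: str) -> str:
    m = UNIT.fullmatch(unit)
    if not m:
        return f"{unit}: not a per-connection sshd unit"
    return (f"connection #{m['n']}: {m['raddr']}:{m['rport']} -> "
            f"{m['laddr']}:{m['lport']}")

print(describe("sshd@9-10.0.0.93:22-10.0.0.1:37400.service"))
# connection #9: 10.0.0.1:37400 -> 10.0.0.93:22
```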
May 15 23:35:59.831822 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:35:59.832892 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:35:59.836766 systemd-logind[1436]: New session 11 of user core. May 15 23:35:59.846685 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 23:35:59.993825 sshd[3955]: Connection closed by 10.0.0.1 port 37408 May 15 23:35:59.993453 sshd-session[3953]: pam_unix(sshd:session): session closed for user core May 15 23:36:00.008709 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:37408.service: Deactivated successfully. May 15 23:36:00.011407 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:36:00.013227 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. May 15 23:36:00.016953 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:37410.service - OpenSSH per-connection server daemon (10.0.0.1:37410). May 15 23:36:00.018270 systemd-logind[1436]: Removed session 11. May 15 23:36:00.069596 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:00.070738 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:00.074948 systemd-logind[1436]: New session 12 of user core. May 15 23:36:00.084685 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:36:00.199252 sshd[3969]: Connection closed by 10.0.0.1 port 37410 May 15 23:36:00.199610 sshd-session[3966]: pam_unix(sshd:session): session closed for user core May 15 23:36:00.202876 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:37410.service: Deactivated successfully. May 15 23:36:00.205334 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:36:00.206189 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. May 15 23:36:00.206954 systemd-logind[1436]: Removed session 12. May 15 23:36:05.218464 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). May 15 23:36:05.275640 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:05.276919 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:05.281969 systemd-logind[1436]: New session 13 of user core. May 15 23:36:05.289693 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 23:36:05.408989 sshd[3984]: Connection closed by 10.0.0.1 port 57992 May 15 23:36:05.409346 sshd-session[3982]: pam_unix(sshd:session): session closed for user core May 15 23:36:05.414286 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:57992.service: Deactivated successfully. May 15 23:36:05.417019 systemd[1]: session-13.scope: Deactivated successfully. May 15 23:36:05.418890 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. May 15 23:36:05.419901 systemd-logind[1436]: Removed session 13. May 15 23:36:10.420850 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:58000.service - OpenSSH per-connection server daemon (10.0.0.1:58000). 
May 15 23:36:10.466214 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 58000 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:10.467518 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:10.471817 systemd-logind[1436]: New session 14 of user core. May 15 23:36:10.481897 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 23:36:10.624078 sshd[3999]: Connection closed by 10.0.0.1 port 58000 May 15 23:36:10.624434 sshd-session[3997]: pam_unix(sshd:session): session closed for user core May 15 23:36:10.638954 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:58000.service: Deactivated successfully. May 15 23:36:10.640521 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:36:10.641271 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. May 15 23:36:10.643300 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016). May 15 23:36:10.644142 systemd-logind[1436]: Removed session 14. May 15 23:36:10.700334 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:10.701381 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:10.705789 systemd-logind[1436]: New session 15 of user core. May 15 23:36:10.715707 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 23:36:10.919028 sshd[4014]: Connection closed by 10.0.0.1 port 58016 May 15 23:36:10.919814 sshd-session[4011]: pam_unix(sshd:session): session closed for user core May 15 23:36:10.932836 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:58016.service: Deactivated successfully. May 15 23:36:10.934545 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:36:10.935855 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. May 15 23:36:10.937125 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). May 15 23:36:10.938213 systemd-logind[1436]: Removed session 15. May 15 23:36:10.993390 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:10.994762 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:10.999253 systemd-logind[1436]: New session 16 of user core. May 15 23:36:11.010735 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 23:36:11.784845 sshd[4027]: Connection closed by 10.0.0.1 port 58018 May 15 23:36:11.785437 sshd-session[4024]: pam_unix(sshd:session): session closed for user core May 15 23:36:11.804375 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:58018.service: Deactivated successfully. May 15 23:36:11.808804 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:36:11.811626 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. May 15 23:36:11.814690 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:58026.service - OpenSSH per-connection server daemon (10.0.0.1:58026). May 15 23:36:11.816314 systemd-logind[1436]: Removed session 16. 
May 15 23:36:11.885638 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 58026 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:11.887300 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:11.892116 systemd-logind[1436]: New session 17 of user core. May 15 23:36:11.902745 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 23:36:12.121629 sshd[4049]: Connection closed by 10.0.0.1 port 58026 May 15 23:36:12.122215 sshd-session[4046]: pam_unix(sshd:session): session closed for user core May 15 23:36:12.137278 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:58026.service: Deactivated successfully. May 15 23:36:12.139047 systemd[1]: session-17.scope: Deactivated successfully. May 15 23:36:12.139789 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. May 15 23:36:12.141854 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:58030.service - OpenSSH per-connection server daemon (10.0.0.1:58030). May 15 23:36:12.142800 systemd-logind[1436]: Removed session 17. May 15 23:36:12.199262 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 58030 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:12.200783 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:12.204840 systemd-logind[1436]: New session 18 of user core. May 15 23:36:12.214715 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:36:12.321825 sshd[4063]: Connection closed by 10.0.0.1 port 58030 May 15 23:36:12.322341 sshd-session[4060]: pam_unix(sshd:session): session closed for user core May 15 23:36:12.325443 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:58030.service: Deactivated successfully. May 15 23:36:12.328127 systemd[1]: session-18.scope: Deactivated successfully. May 15 23:36:12.328889 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. May 15 23:36:12.329678 systemd-logind[1436]: Removed session 18. May 15 23:36:17.333958 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:51882.service - OpenSSH per-connection server daemon (10.0.0.1:51882). May 15 23:36:17.388823 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 51882 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:17.390700 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:17.394834 systemd-logind[1436]: New session 19 of user core. May 15 23:36:17.402715 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:36:17.514007 sshd[4082]: Connection closed by 10.0.0.1 port 51882 May 15 23:36:17.514558 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 15 23:36:17.517915 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:51882.service: Deactivated successfully. May 15 23:36:17.519633 systemd[1]: session-19.scope: Deactivated successfully. May 15 23:36:17.521099 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. May 15 23:36:17.521938 systemd-logind[1436]: Removed session 19. May 15 23:36:22.526052 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:60828.service - OpenSSH per-connection server daemon (10.0.0.1:60828). 
May 15 23:36:22.578916 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 60828 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:22.580218 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:22.585625 systemd-logind[1436]: New session 20 of user core. May 15 23:36:22.599733 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 23:36:22.729796 sshd[4100]: Connection closed by 10.0.0.1 port 60828 May 15 23:36:22.730527 sshd-session[4098]: pam_unix(sshd:session): session closed for user core May 15 23:36:22.734228 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:60828.service: Deactivated successfully. May 15 23:36:22.736287 systemd[1]: session-20.scope: Deactivated successfully. May 15 23:36:22.737060 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. May 15 23:36:22.738223 systemd-logind[1436]: Removed session 20. May 15 23:36:27.742463 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:60844.service - OpenSSH per-connection server daemon (10.0.0.1:60844). May 15 23:36:27.795333 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 60844 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:27.796577 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:27.800745 systemd-logind[1436]: New session 21 of user core. May 15 23:36:27.806686 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 23:36:27.913369 sshd[4116]: Connection closed by 10.0.0.1 port 60844 May 15 23:36:27.913878 sshd-session[4114]: pam_unix(sshd:session): session closed for user core May 15 23:36:27.925108 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:60844.service: Deactivated successfully. May 15 23:36:27.927180 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:36:27.928570 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. May 15 23:36:27.929970 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:60852.service - OpenSSH per-connection server daemon (10.0.0.1:60852). May 15 23:36:27.932164 systemd-logind[1436]: Removed session 21. May 15 23:36:27.979142 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 60852 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:27.980348 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:27.984066 systemd-logind[1436]: New session 22 of user core. May 15 23:36:27.999732 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 23:36:29.995271 containerd[1455]: time="2025-05-15T23:36:29.995221553Z" level=info msg="StopContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" with timeout 30 (s)" May 15 23:36:30.002511 containerd[1455]: time="2025-05-15T23:36:30.002471368Z" level=info msg="Stop container \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" with signal terminated" May 15 23:36:30.013032 systemd[1]: cri-containerd-479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1.scope: Deactivated successfully. 
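Annotation: "StopContainer ... with timeout 30" followed by "Stop container ... with signal terminated" above is the usual graceful-shutdown pattern: send SIGTERM, wait up to the timeout, and only then force-kill. A generic Python sketch of that pattern, offered as an illustration of the idea rather than containerd's implementation:

```python
import subprocess

def stop_gracefully(proc: subprocess.Popen, timeout_s: float = 30.0) -> int:
    """SIGTERM the process, escalate to SIGKILL if it ignores the deadline."""
    proc.terminate()                 # SIGTERM, the "signal terminated" in the log
    try:
        return proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()                  # SIGKILL after the grace period expires
        return proc.wait()

# Usage with a throwaway child process:
child = subprocess.Popen(["sleep", "300"])
print("exit status:", stop_gracefully(child, timeout_s=5.0))
```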
May 15 23:36:30.015363 containerd[1455]: time="2025-05-15T23:36:30.014197815Z" level=info msg="received exit event container_id:\"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" id:\"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" pid:3178 exited_at:{seconds:1747352190 nanos:13961893}" May 15 23:36:30.015363 containerd[1455]: time="2025-05-15T23:36:30.014323896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" id:\"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" pid:3178 exited_at:{seconds:1747352190 nanos:13961893}" May 15 23:36:30.027622 containerd[1455]: time="2025-05-15T23:36:30.027503354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" id:\"846ea072d73f709f941ee5bb4f7ece3e44d1bc38fd190142409c04c238ee672f\" pid:4161 exited_at:{seconds:1747352190 nanos:26993790}" May 15 23:36:30.029463 containerd[1455]: time="2025-05-15T23:36:30.029316207Z" level=info msg="StopContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" with timeout 2 (s)" May 15 23:36:30.029647 containerd[1455]: time="2025-05-15T23:36:30.029607769Z" level=info msg="Stop container \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" with signal terminated" May 15 23:36:30.034217 containerd[1455]: time="2025-05-15T23:36:30.034162403Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:36:30.038403 systemd-networkd[1396]: lxc_health: Link DOWN May 15 23:36:30.038412 systemd-networkd[1396]: lxc_health: Lost carrier May 15 23:36:30.046730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1-rootfs.mount: Deactivated successfully. May 15 23:36:30.052040 systemd[1]: cri-containerd-66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca.scope: Deactivated successfully. May 15 23:36:30.052351 systemd[1]: cri-containerd-66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca.scope: Consumed 6.486s CPU time, 122.8M memory peak, 140K read from disk, 12.9M written to disk. 
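Annotation: the "Consumed 6.486s CPU time, 122.8M memory peak ..." line is systemd reporting cgroup accounting as the cilium-agent scope is torn down. Comparable numbers can be read directly from the unified (cgroup v2) hierarchy while a scope is still running; a sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup and using a hypothetical scope path:

```python
from pathlib import Path

# Hypothetical scope; substitute the real cri-containerd-<id>.scope path.
scope = Path("/sys/fs/cgroup/system.slice/cri-containerd-example.scope")

def cpu_seconds(cg: Path) -> float:
    """Total CPU time from cpu.stat's usage_usec counter."""
    for line in (cg / "cpu.stat").read_text().splitlines():
        key, value = line.split()
        if key == "usage_usec":
            return int(value) / 1e6
    return 0.0

def memory_peak_bytes(cg: Path) -> int:
    """High-water mark of memory use (memory.peak, available on newer kernels)."""
    return int((cg / "memory.peak").read_text())

if scope.exists():
    print(f"CPU time: {cpu_seconds(scope):.3f}s, "
          f"memory peak: {memory_peak_bytes(scope) / 2**20:.1f}M")
```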
May 15 23:36:30.053042 containerd[1455]: time="2025-05-15T23:36:30.053007062Z" level=info msg="received exit event container_id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" pid:3211 exited_at:{seconds:1747352190 nanos:52727500}" May 15 23:36:30.053311 containerd[1455]: time="2025-05-15T23:36:30.053216384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" id:\"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" pid:3211 exited_at:{seconds:1747352190 nanos:52727500}" May 15 23:36:30.079347 containerd[1455]: time="2025-05-15T23:36:30.079274857Z" level=info msg="StopContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" returns successfully" May 15 23:36:30.080103 containerd[1455]: time="2025-05-15T23:36:30.080044703Z" level=info msg="StopPodSandbox for \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\"" May 15 23:36:30.095028 containerd[1455]: time="2025-05-15T23:36:30.094982853Z" level=info msg="Container to stop \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.101840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca-rootfs.mount: Deactivated successfully. May 15 23:36:30.104481 systemd[1]: cri-containerd-d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92.scope: Deactivated successfully. May 15 23:36:30.106311 containerd[1455]: time="2025-05-15T23:36:30.106265337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" id:\"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" pid:2743 exit_status:137 exited_at:{seconds:1747352190 nanos:105971094}" May 15 23:36:30.113393 containerd[1455]: time="2025-05-15T23:36:30.113359989Z" level=info msg="StopContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" returns successfully" May 15 23:36:30.113805 containerd[1455]: time="2025-05-15T23:36:30.113780272Z" level=info msg="StopPodSandbox for \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\"" May 15 23:36:30.113870 containerd[1455]: time="2025-05-15T23:36:30.113835593Z" level=info msg="Container to stop \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.113870 containerd[1455]: time="2025-05-15T23:36:30.113848473Z" level=info msg="Container to stop \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.113870 containerd[1455]: time="2025-05-15T23:36:30.113857073Z" level=info msg="Container to stop \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.113870 containerd[1455]: time="2025-05-15T23:36:30.113864833Z" level=info msg="Container to stop \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.113979 containerd[1455]: time="2025-05-15T23:36:30.113872993Z" level=info msg="Container to stop 
\"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:36:30.119802 systemd[1]: cri-containerd-21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b.scope: Deactivated successfully. May 15 23:36:30.134688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92-rootfs.mount: Deactivated successfully. May 15 23:36:30.140742 containerd[1455]: time="2025-05-15T23:36:30.140636751Z" level=info msg="shim disconnected" id=d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92 namespace=k8s.io May 15 23:36:30.140742 containerd[1455]: time="2025-05-15T23:36:30.140673751Z" level=warning msg="cleaning up after shim disconnected" id=d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92 namespace=k8s.io May 15 23:36:30.140742 containerd[1455]: time="2025-05-15T23:36:30.140706752Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:36:30.141048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b-rootfs.mount: Deactivated successfully. May 15 23:36:30.142616 containerd[1455]: time="2025-05-15T23:36:30.142528525Z" level=info msg="shim disconnected" id=21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b namespace=k8s.io May 15 23:36:30.142616 containerd[1455]: time="2025-05-15T23:36:30.142586766Z" level=warning msg="cleaning up after shim disconnected" id=21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b namespace=k8s.io May 15 23:36:30.142616 containerd[1455]: time="2025-05-15T23:36:30.142620766Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:36:30.166929 containerd[1455]: time="2025-05-15T23:36:30.166762545Z" level=info msg="received exit event sandbox_id:\"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" exit_status:137 exited_at:{seconds:1747352190 nanos:122067094}" May 15 23:36:30.166929 containerd[1455]: time="2025-05-15T23:36:30.166893346Z" level=info msg="TearDown network for sandbox \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" successfully" May 15 23:36:30.166929 containerd[1455]: time="2025-05-15T23:36:30.166917066Z" level=info msg="StopPodSandbox for \"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" returns successfully" May 15 23:36:30.168578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b-shm.mount: Deactivated successfully. 
May 15 23:36:30.174185 containerd[1455]: time="2025-05-15T23:36:30.174116719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" id:\"21ca1e053f710fee7cfac0cff443310e72ed961e055095cec14a47afa2c1031b\" pid:2731 exit_status:137 exited_at:{seconds:1747352190 nanos:122067094}" May 15 23:36:30.174301 containerd[1455]: time="2025-05-15T23:36:30.174136439Z" level=info msg="received exit event sandbox_id:\"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" exit_status:137 exited_at:{seconds:1747352190 nanos:105971094}" May 15 23:36:30.174333 containerd[1455]: time="2025-05-15T23:36:30.174282680Z" level=info msg="TearDown network for sandbox \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" successfully" May 15 23:36:30.174333 containerd[1455]: time="2025-05-15T23:36:30.174313680Z" level=info msg="StopPodSandbox for \"d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92\" returns successfully" May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264617 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-etc-cni-netd\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264670 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-xtables-lock\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264690 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-lib-modules\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264707 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-kernel\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264731 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hubble-tls\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265093 kubelet[2547]: I0515 23:36:30.264745 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-run\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264764 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-bpf-maps\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264782 2547 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-clustermesh-secrets\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264801 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f32535c-9242-4145-871e-d77c62b4288c-cilium-config-path\") pod \"0f32535c-9242-4145-871e-d77c62b4288c\" (UID: \"0f32535c-9242-4145-871e-d77c62b4288c\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264820 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-config-path\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264835 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-net\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265630 kubelet[2547]: I0515 23:36:30.264852 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgfnz\" (UniqueName: \"kubernetes.io/projected/0f32535c-9242-4145-871e-d77c62b4288c-kube-api-access-zgfnz\") pod \"0f32535c-9242-4145-871e-d77c62b4288c\" (UID: \"0f32535c-9242-4145-871e-d77c62b4288c\") " May 15 23:36:30.265875 kubelet[2547]: I0515 23:36:30.264865 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hostproc\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265875 kubelet[2547]: I0515 23:36:30.264882 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-cgroup\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265875 kubelet[2547]: I0515 23:36:30.264901 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dsfq\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-kube-api-access-7dsfq\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.265875 kubelet[2547]: I0515 23:36:30.264916 2547 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cni-path\") pod \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\" (UID: \"2e75bc48-994f-4aba-8ec9-8c4fb25c7b06\") " May 15 23:36:30.271575 kubelet[2547]: I0515 23:36:30.271027 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cni-path" (OuterVolumeSpecName: "cni-path") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.271575 kubelet[2547]: I0515 23:36:30.271031 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.271575 kubelet[2547]: I0515 23:36:30.271271 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.271575 kubelet[2547]: I0515 23:36:30.271324 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.271575 kubelet[2547]: I0515 23:36:30.271366 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.271799 kubelet[2547]: I0515 23:36:30.271439 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.276989 kubelet[2547]: I0515 23:36:30.276583 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f32535c-9242-4145-871e-d77c62b4288c-kube-api-access-zgfnz" (OuterVolumeSpecName: "kube-api-access-zgfnz") pod "0f32535c-9242-4145-871e-d77c62b4288c" (UID: "0f32535c-9242-4145-871e-d77c62b4288c"). InnerVolumeSpecName "kube-api-access-zgfnz". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:36:30.276989 kubelet[2547]: I0515 23:36:30.276607 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.276989 kubelet[2547]: I0515 23:36:30.276693 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 23:36:30.276989 kubelet[2547]: I0515 23:36:30.276729 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.276989 kubelet[2547]: I0515 23:36:30.276750 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hostproc" (OuterVolumeSpecName: "hostproc") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.277434 kubelet[2547]: I0515 23:36:30.277405 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 23:36:30.278433 kubelet[2547]: I0515 23:36:30.278404 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 23:36:30.279042 kubelet[2547]: I0515 23:36:30.279015 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:36:30.279081 kubelet[2547]: I0515 23:36:30.279051 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-kube-api-access-7dsfq" (OuterVolumeSpecName: "kube-api-access-7dsfq") pod "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" (UID: "2e75bc48-994f-4aba-8ec9-8c4fb25c7b06"). InnerVolumeSpecName "kube-api-access-7dsfq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 23:36:30.282515 kubelet[2547]: I0515 23:36:30.282450 2547 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f32535c-9242-4145-871e-d77c62b4288c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f32535c-9242-4145-871e-d77c62b4288c" (UID: "0f32535c-9242-4145-871e-d77c62b4288c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 23:36:30.365101 kubelet[2547]: I0515 23:36:30.365059 2547 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365101 kubelet[2547]: I0515 23:36:30.365095 2547 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365101 kubelet[2547]: I0515 23:36:30.365106 2547 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zgfnz\" (UniqueName: \"kubernetes.io/projected/0f32535c-9242-4145-871e-d77c62b4288c-kube-api-access-zgfnz\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365117 2547 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365125 2547 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365133 2547 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dsfq\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-kube-api-access-7dsfq\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365141 2547 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365148 2547 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365155 2547 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365163 2547 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365311 kubelet[2547]: I0515 23:36:30.365170 2547 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365487 kubelet[2547]: I0515 23:36:30.365178 2547 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365487 kubelet[2547]: I0515 23:36:30.365185 2547 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-cilium-run\") on node 
\"localhost\" DevicePath \"\"" May 15 23:36:30.365487 kubelet[2547]: I0515 23:36:30.365193 2547 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365487 kubelet[2547]: I0515 23:36:30.365201 2547 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.365487 kubelet[2547]: I0515 23:36:30.365209 2547 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f32535c-9242-4145-871e-d77c62b4288c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:36:30.628242 kubelet[2547]: I0515 23:36:30.628068 2547 scope.go:117] "RemoveContainer" containerID="479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1" May 15 23:36:30.630760 containerd[1455]: time="2025-05-15T23:36:30.630329256Z" level=info msg="RemoveContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\"" May 15 23:36:30.635665 systemd[1]: Removed slice kubepods-besteffort-pod0f32535c_9242_4145_871e_d77c62b4288c.slice - libcontainer container kubepods-besteffort-pod0f32535c_9242_4145_871e_d77c62b4288c.slice. May 15 23:36:30.641559 containerd[1455]: time="2025-05-15T23:36:30.640332891Z" level=info msg="RemoveContainer for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" returns successfully" May 15 23:36:30.641559 containerd[1455]: time="2025-05-15T23:36:30.640840454Z" level=error msg="ContainerStatus for \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\": not found" May 15 23:36:30.641678 kubelet[2547]: I0515 23:36:30.640569 2547 scope.go:117] "RemoveContainer" containerID="479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1" May 15 23:36:30.642755 systemd[1]: Removed slice kubepods-burstable-pod2e75bc48_994f_4aba_8ec9_8c4fb25c7b06.slice - libcontainer container kubepods-burstable-pod2e75bc48_994f_4aba_8ec9_8c4fb25c7b06.slice. May 15 23:36:30.642853 systemd[1]: kubepods-burstable-pod2e75bc48_994f_4aba_8ec9_8c4fb25c7b06.slice: Consumed 6.615s CPU time, 123.1M memory peak, 152K read from disk, 12.9M written to disk. 
May 15 23:36:30.645817 kubelet[2547]: E0515 23:36:30.645774 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\": not found" containerID="479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1" May 15 23:36:30.645889 kubelet[2547]: I0515 23:36:30.645826 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1"} err="failed to get container status \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"479f83805af9d35c90d6fa49fe7223af45dbb7eed4ca6c48e5181aa0b87d35d1\": not found" May 15 23:36:30.645889 kubelet[2547]: I0515 23:36:30.645859 2547 scope.go:117] "RemoveContainer" containerID="66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca" May 15 23:36:30.647651 containerd[1455]: time="2025-05-15T23:36:30.647565664Z" level=info msg="RemoveContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\"" May 15 23:36:30.652687 containerd[1455]: time="2025-05-15T23:36:30.652654862Z" level=info msg="RemoveContainer for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" returns successfully" May 15 23:36:30.652847 kubelet[2547]: I0515 23:36:30.652827 2547 scope.go:117] "RemoveContainer" containerID="a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5" May 15 23:36:30.672365 containerd[1455]: time="2025-05-15T23:36:30.672324127Z" level=info msg="RemoveContainer for \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\"" May 15 23:36:30.675745 containerd[1455]: time="2025-05-15T23:36:30.675711512Z" level=info msg="RemoveContainer for \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" returns successfully" May 15 23:36:30.675888 kubelet[2547]: I0515 23:36:30.675859 2547 scope.go:117] "RemoveContainer" containerID="357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd" May 15 23:36:30.678050 containerd[1455]: time="2025-05-15T23:36:30.678023370Z" level=info msg="RemoveContainer for \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\"" May 15 23:36:30.681384 containerd[1455]: time="2025-05-15T23:36:30.681353714Z" level=info msg="RemoveContainer for \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" returns successfully" May 15 23:36:30.681519 kubelet[2547]: I0515 23:36:30.681490 2547 scope.go:117] "RemoveContainer" containerID="d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0" May 15 23:36:30.683980 containerd[1455]: time="2025-05-15T23:36:30.683949573Z" level=info msg="RemoveContainer for \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\"" May 15 23:36:30.687048 containerd[1455]: time="2025-05-15T23:36:30.687017636Z" level=info msg="RemoveContainer for \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" returns successfully" May 15 23:36:30.687233 kubelet[2547]: I0515 23:36:30.687194 2547 scope.go:117] "RemoveContainer" containerID="18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885" May 15 23:36:30.688518 containerd[1455]: time="2025-05-15T23:36:30.688493967Z" level=info msg="RemoveContainer for \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\"" May 15 23:36:30.690870 containerd[1455]: 
time="2025-05-15T23:36:30.690841264Z" level=info msg="RemoveContainer for \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" returns successfully" May 15 23:36:30.691001 kubelet[2547]: I0515 23:36:30.690982 2547 scope.go:117] "RemoveContainer" containerID="66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca" May 15 23:36:30.695318 containerd[1455]: time="2025-05-15T23:36:30.695256177Z" level=error msg="ContainerStatus for \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\": not found" May 15 23:36:30.695598 kubelet[2547]: E0515 23:36:30.695427 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\": not found" containerID="66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca" May 15 23:36:30.695598 kubelet[2547]: I0515 23:36:30.695455 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca"} err="failed to get container status \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"66b36ce5a7764037277cc6be3853caabae31172f17cae7d7d0252e90e86700ca\": not found" May 15 23:36:30.695598 kubelet[2547]: I0515 23:36:30.695476 2547 scope.go:117] "RemoveContainer" containerID="a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5" May 15 23:36:30.695708 containerd[1455]: time="2025-05-15T23:36:30.695665380Z" level=error msg="ContainerStatus for \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\": not found" May 15 23:36:30.695821 kubelet[2547]: E0515 23:36:30.695793 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\": not found" containerID="a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5" May 15 23:36:30.695862 kubelet[2547]: I0515 23:36:30.695820 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5"} err="failed to get container status \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6161f84c4ef64f23508f5ac20abcb76e66b8a65ce4d1a6b01e862a370edadb5\": not found" May 15 23:36:30.695862 kubelet[2547]: I0515 23:36:30.695835 2547 scope.go:117] "RemoveContainer" containerID="357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd" May 15 23:36:30.696009 containerd[1455]: time="2025-05-15T23:36:30.695967982Z" level=error msg="ContainerStatus for \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\": not found" May 15 23:36:30.696118 kubelet[2547]: E0515 
23:36:30.696087 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\": not found" containerID="357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd" May 15 23:36:30.696160 kubelet[2547]: I0515 23:36:30.696115 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd"} err="failed to get container status \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"357d2f5391a439087193f70cde9ee3751964951110c4b70c28d758675608dbdd\": not found" May 15 23:36:30.696160 kubelet[2547]: I0515 23:36:30.696131 2547 scope.go:117] "RemoveContainer" containerID="d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0" May 15 23:36:30.696428 containerd[1455]: time="2025-05-15T23:36:30.696402866Z" level=error msg="ContainerStatus for \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\": not found" May 15 23:36:30.696554 kubelet[2547]: E0515 23:36:30.696524 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\": not found" containerID="d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0" May 15 23:36:30.696708 kubelet[2547]: I0515 23:36:30.696615 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0"} err="failed to get container status \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d63a624bf5b60099542ec79300c3803784e0eab2bf37b37bb2cff50076ea93e0\": not found" May 15 23:36:30.696708 kubelet[2547]: I0515 23:36:30.696634 2547 scope.go:117] "RemoveContainer" containerID="18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885" May 15 23:36:30.696823 containerd[1455]: time="2025-05-15T23:36:30.696790229Z" level=error msg="ContainerStatus for \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\": not found" May 15 23:36:30.696928 kubelet[2547]: E0515 23:36:30.696909 2547 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\": not found" containerID="18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885" May 15 23:36:30.696967 kubelet[2547]: I0515 23:36:30.696933 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885"} err="failed to get container status \"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"18a9b69d8c5673967e5a17e0011a1ff7d57bc4dd4b90b0a9fd8cdaac6d082885\": not found" May 15 23:36:31.045967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d52d19a898e6478a9a228753d451618364345f31886008b6def921eb3fcd4b92-shm.mount: Deactivated successfully. May 15 23:36:31.046085 systemd[1]: var-lib-kubelet-pods-0f32535c\x2d9242\x2d4145\x2d871e\x2dd77c62b4288c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzgfnz.mount: Deactivated successfully. May 15 23:36:31.046143 systemd[1]: var-lib-kubelet-pods-2e75bc48\x2d994f\x2d4aba\x2d8ec9\x2d8c4fb25c7b06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7dsfq.mount: Deactivated successfully. May 15 23:36:31.046202 systemd[1]: var-lib-kubelet-pods-2e75bc48\x2d994f\x2d4aba\x2d8ec9\x2d8c4fb25c7b06-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 23:36:31.046255 systemd[1]: var-lib-kubelet-pods-2e75bc48\x2d994f\x2d4aba\x2d8ec9\x2d8c4fb25c7b06-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:36:31.435990 kubelet[2547]: I0515 23:36:31.435940 2547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f32535c-9242-4145-871e-d77c62b4288c" path="/var/lib/kubelet/pods/0f32535c-9242-4145-871e-d77c62b4288c/volumes" May 15 23:36:31.436362 kubelet[2547]: I0515 23:36:31.436342 2547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e75bc48-994f-4aba-8ec9-8c4fb25c7b06" path="/var/lib/kubelet/pods/2e75bc48-994f-4aba-8ec9-8c4fb25c7b06/volumes" May 15 23:36:31.482436 kubelet[2547]: E0515 23:36:31.482390 2547 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:36:31.955238 sshd[4133]: Connection closed by 10.0.0.1 port 60852 May 15 23:36:31.956975 sshd-session[4130]: pam_unix(sshd:session): session closed for user core May 15 23:36:31.969325 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860). May 15 23:36:31.969779 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:60852.service: Deactivated successfully. May 15 23:36:31.971588 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:36:31.971832 systemd[1]: session-22.scope: Consumed 1.337s CPU time, 28.7M memory peak. May 15 23:36:31.973115 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit. May 15 23:36:31.981958 systemd-logind[1436]: Removed session 22. May 15 23:36:32.027466 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:32.028830 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:32.036279 systemd-logind[1436]: New session 23 of user core. May 15 23:36:32.042751 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 23:36:33.071873 sshd[4286]: Connection closed by 10.0.0.1 port 60860 May 15 23:36:33.071835 sshd-session[4281]: pam_unix(sshd:session): session closed for user core May 15 23:36:33.084427 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:60860.service: Deactivated successfully. May 15 23:36:33.090244 systemd[1]: session-23.scope: Deactivated successfully. May 15 23:36:33.092765 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit. 
May 15 23:36:33.097015 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:55878.service - OpenSSH per-connection server daemon (10.0.0.1:55878). May 15 23:36:33.106084 systemd-logind[1436]: Removed session 23. May 15 23:36:33.116585 systemd[1]: Created slice kubepods-burstable-pod1d4441fe_5ae7_4f58_a485_114ad8be6305.slice - libcontainer container kubepods-burstable-pod1d4441fe_5ae7_4f58_a485_114ad8be6305.slice. May 15 23:36:33.153815 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 55878 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:33.155227 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:33.161653 systemd-logind[1436]: New session 24 of user core. May 15 23:36:33.167709 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 23:36:33.180804 kubelet[2547]: I0515 23:36:33.180755 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-xtables-lock\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.180804 kubelet[2547]: I0515 23:36:33.180804 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d4441fe-5ae7-4f58-a485-114ad8be6305-clustermesh-secrets\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180827 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1d4441fe-5ae7-4f58-a485-114ad8be6305-cilium-ipsec-secrets\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180843 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsz5\" (UniqueName: \"kubernetes.io/projected/1d4441fe-5ae7-4f58-a485-114ad8be6305-kube-api-access-lfsz5\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180862 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-host-proc-sys-kernel\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180876 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d4441fe-5ae7-4f58-a485-114ad8be6305-hubble-tls\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180892 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-cilium-run\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181201 kubelet[2547]: I0515 23:36:33.180906 
2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-bpf-maps\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180921 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-etc-cni-netd\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180935 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-lib-modules\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180949 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d4441fe-5ae7-4f58-a485-114ad8be6305-cilium-config-path\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180966 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-hostproc\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180982 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-host-proc-sys-net\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181358 kubelet[2547]: I0515 23:36:33.180995 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-cilium-cgroup\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.181488 kubelet[2547]: I0515 23:36:33.181010 2547 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d4441fe-5ae7-4f58-a485-114ad8be6305-cni-path\") pod \"cilium-6gtfw\" (UID: \"1d4441fe-5ae7-4f58-a485-114ad8be6305\") " pod="kube-system/cilium-6gtfw" May 15 23:36:33.217580 sshd[4300]: Connection closed by 10.0.0.1 port 55878 May 15 23:36:33.218095 sshd-session[4297]: pam_unix(sshd:session): session closed for user core May 15 23:36:33.230980 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:55878.service: Deactivated successfully. May 15 23:36:33.232773 systemd[1]: session-24.scope: Deactivated successfully. May 15 23:36:33.233386 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit. May 15 23:36:33.235480 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:55886.service - OpenSSH per-connection server daemon (10.0.0.1:55886). 
May 15 23:36:33.236585 systemd-logind[1436]: Removed session 24. May 15 23:36:33.274487 kubelet[2547]: I0515 23:36:33.273748 2547 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:36:33Z","lastTransitionTime":"2025-05-15T23:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 23:36:33.294126 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 55886 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:36:33.297849 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:36:33.306923 systemd-logind[1436]: New session 25 of user core. May 15 23:36:33.313728 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 23:36:33.421807 kubelet[2547]: E0515 23:36:33.421767 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:33.422373 containerd[1455]: time="2025-05-15T23:36:33.422273916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gtfw,Uid:1d4441fe-5ae7-4f58-a485-114ad8be6305,Namespace:kube-system,Attempt:0,}" May 15 23:36:33.442773 containerd[1455]: time="2025-05-15T23:36:33.442715216Z" level=info msg="connecting to shim f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" namespace=k8s.io protocol=ttrpc version=3 May 15 23:36:33.481790 systemd[1]: Started cri-containerd-f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378.scope - libcontainer container f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378. 
May 15 23:36:33.509390 containerd[1455]: time="2025-05-15T23:36:33.509349110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gtfw,Uid:1d4441fe-5ae7-4f58-a485-114ad8be6305,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\"" May 15 23:36:33.510458 kubelet[2547]: E0515 23:36:33.510428 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:33.518632 containerd[1455]: time="2025-05-15T23:36:33.518212330Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:36:33.532660 containerd[1455]: time="2025-05-15T23:36:33.532570548Z" level=info msg="Container da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46: CDI devices from CRI Config.CDIDevices: []" May 15 23:36:33.538698 containerd[1455]: time="2025-05-15T23:36:33.538633709Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\"" May 15 23:36:33.541114 containerd[1455]: time="2025-05-15T23:36:33.539866878Z" level=info msg="StartContainer for \"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\"" May 15 23:36:33.541114 containerd[1455]: time="2025-05-15T23:36:33.540821364Z" level=info msg="connecting to shim da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" protocol=ttrpc version=3 May 15 23:36:33.559742 systemd[1]: Started cri-containerd-da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46.scope - libcontainer container da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46. May 15 23:36:33.591802 containerd[1455]: time="2025-05-15T23:36:33.591747431Z" level=info msg="StartContainer for \"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\" returns successfully" May 15 23:36:33.610083 systemd[1]: cri-containerd-da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46.scope: Deactivated successfully. 
May 15 23:36:33.612467 containerd[1455]: time="2025-05-15T23:36:33.612425372Z" level=info msg="received exit event container_id:\"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\" id:\"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\" pid:4380 exited_at:{seconds:1747352193 nanos:612028249}" May 15 23:36:33.612567 containerd[1455]: time="2025-05-15T23:36:33.612501293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\" id:\"da2f3ae18e0fd3bfabbfe1d125cd68d8e77278bbb184f4628fb757a52273de46\" pid:4380 exited_at:{seconds:1747352193 nanos:612028249}" May 15 23:36:33.645233 kubelet[2547]: E0515 23:36:33.645193 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:33.650498 containerd[1455]: time="2025-05-15T23:36:33.650447271Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:36:33.657549 containerd[1455]: time="2025-05-15T23:36:33.657495839Z" level=info msg="Container a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24: CDI devices from CRI Config.CDIDevices: []" May 15 23:36:33.665951 containerd[1455]: time="2025-05-15T23:36:33.665876856Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\"" May 15 23:36:33.666494 containerd[1455]: time="2025-05-15T23:36:33.666448140Z" level=info msg="StartContainer for \"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\"" May 15 23:36:33.667259 containerd[1455]: time="2025-05-15T23:36:33.667227706Z" level=info msg="connecting to shim a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" protocol=ttrpc version=3 May 15 23:36:33.685719 systemd[1]: Started cri-containerd-a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24.scope - libcontainer container a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24. May 15 23:36:33.718824 containerd[1455]: time="2025-05-15T23:36:33.718784777Z" level=info msg="StartContainer for \"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\" returns successfully" May 15 23:36:33.734932 systemd[1]: cri-containerd-a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24.scope: Deactivated successfully. 
May 15 23:36:33.736295 containerd[1455]: time="2025-05-15T23:36:33.736258456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\" id:\"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\" pid:4425 exited_at:{seconds:1747352193 nanos:735839373}" May 15 23:36:33.743695 containerd[1455]: time="2025-05-15T23:36:33.743637506Z" level=info msg="received exit event container_id:\"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\" id:\"a30d7f33b74f48f7970005430125c7d118ab85f350de838584e9dd62fda3bc24\" pid:4425 exited_at:{seconds:1747352193 nanos:735839373}" May 15 23:36:34.649596 kubelet[2547]: E0515 23:36:34.649565 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:34.658756 containerd[1455]: time="2025-05-15T23:36:34.658703301Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:36:34.669429 containerd[1455]: time="2025-05-15T23:36:34.669189210Z" level=info msg="Container 1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8: CDI devices from CRI Config.CDIDevices: []" May 15 23:36:34.674944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597294136.mount: Deactivated successfully. May 15 23:36:34.680098 containerd[1455]: time="2025-05-15T23:36:34.679957361Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\"" May 15 23:36:34.680926 containerd[1455]: time="2025-05-15T23:36:34.680821127Z" level=info msg="StartContainer for \"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\"" May 15 23:36:34.683272 containerd[1455]: time="2025-05-15T23:36:34.683187583Z" level=info msg="connecting to shim 1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" protocol=ttrpc version=3 May 15 23:36:34.705965 systemd[1]: Started cri-containerd-1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8.scope - libcontainer container 1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8. May 15 23:36:34.765797 systemd[1]: cri-containerd-1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8.scope: Deactivated successfully. 
May 15 23:36:34.767482 containerd[1455]: time="2025-05-15T23:36:34.767441982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\" id:\"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\" pid:4470 exited_at:{seconds:1747352194 nanos:767162980}" May 15 23:36:34.767666 containerd[1455]: time="2025-05-15T23:36:34.767477222Z" level=info msg="received exit event container_id:\"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\" id:\"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\" pid:4470 exited_at:{seconds:1747352194 nanos:767162980}" May 15 23:36:34.770000 containerd[1455]: time="2025-05-15T23:36:34.769946878Z" level=info msg="StartContainer for \"1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8\" returns successfully" May 15 23:36:34.788669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d14113698e359b99d8a308be5b24f5acdeeefa175453bd7eb5bf710b0d430e8-rootfs.mount: Deactivated successfully. May 15 23:36:35.433722 kubelet[2547]: E0515 23:36:35.433674 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:35.433963 kubelet[2547]: E0515 23:36:35.433827 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:35.655634 kubelet[2547]: E0515 23:36:35.655350 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:35.660665 containerd[1455]: time="2025-05-15T23:36:35.660478465Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:36:35.677331 containerd[1455]: time="2025-05-15T23:36:35.677272133Z" level=info msg="Container 07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1: CDI devices from CRI Config.CDIDevices: []" May 15 23:36:35.686583 containerd[1455]: time="2025-05-15T23:36:35.686041710Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\"" May 15 23:36:35.686583 containerd[1455]: time="2025-05-15T23:36:35.686515113Z" level=info msg="StartContainer for \"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\"" May 15 23:36:35.687971 containerd[1455]: time="2025-05-15T23:36:35.687931922Z" level=info msg="connecting to shim 07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" protocol=ttrpc version=3 May 15 23:36:35.713754 systemd[1]: Started cri-containerd-07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1.scope - libcontainer container 07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1. May 15 23:36:35.737325 systemd[1]: cri-containerd-07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1.scope: Deactivated successfully. 
May 15 23:36:35.737889 containerd[1455]: time="2025-05-15T23:36:35.737741243Z" level=info msg="received exit event container_id:\"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\" id:\"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\" pid:4508 exited_at:{seconds:1747352195 nanos:737440561}" May 15 23:36:35.738448 containerd[1455]: time="2025-05-15T23:36:35.738414488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\" id:\"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\" pid:4508 exited_at:{seconds:1747352195 nanos:737440561}" May 15 23:36:35.745940 containerd[1455]: time="2025-05-15T23:36:35.745901776Z" level=info msg="StartContainer for \"07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1\" returns successfully" May 15 23:36:35.758580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07106834d70fb7ccd4cd8105e39997c11d00ee01431af2230519c0b3a094d3f1-rootfs.mount: Deactivated successfully. May 15 23:36:36.484081 kubelet[2547]: E0515 23:36:36.484045 2547 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:36:36.660247 kubelet[2547]: E0515 23:36:36.660062 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:36:36.665042 containerd[1455]: time="2025-05-15T23:36:36.664998471Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:36:36.677393 containerd[1455]: time="2025-05-15T23:36:36.676624144Z" level=info msg="Container b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e: CDI devices from CRI Config.CDIDevices: []" May 15 23:36:36.687007 containerd[1455]: time="2025-05-15T23:36:36.686951009Z" level=info msg="CreateContainer within sandbox \"f8fce2601f014257d73309e4f97712b61a760ab80b29b1965267510f35c72378\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\"" May 15 23:36:36.688552 containerd[1455]: time="2025-05-15T23:36:36.687449732Z" level=info msg="StartContainer for \"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\"" May 15 23:36:36.689660 containerd[1455]: time="2025-05-15T23:36:36.689622825Z" level=info msg="connecting to shim b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e" address="unix:///run/containerd/s/34520a2572a79aed716edf16d1f26040c69bc6aaeb955f45b95cdcfc160a5546" protocol=ttrpc version=3 May 15 23:36:36.708729 systemd[1]: Started cri-containerd-b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e.scope - libcontainer container b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e. 
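Between 23:36:33 and 23:36:36 the journal records five CreateContainer/StartContainer cycles inside the f8fce260... sandbox, in what looks like Cilium's usual init-container sequence followed by the agent: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent. A throwaway sketch (not part of any tool referenced in this log) that recovers that order from entries in this format, keyed to the "&ContainerMetadata{Name:...," text in the CreateContainer requests above:

```python
import re

# Matches the CreateContainer *request* lines ("for container &ContainerMetadata{...}"),
# not the "returns container id" lines, which say "for &ContainerMetadata{...}".
CREATE_RE = re.compile(r'for container &ContainerMetadata\{Name:([^,]+),')

def container_creation_order(journal_text: str) -> list[str]:
    return CREATE_RE.findall(journal_text)

# Applied to the stretch above this yields:
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']
```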
May 15 23:36:36.740135 containerd[1455]: time="2025-05-15T23:36:36.740009502Z" level=info msg="StartContainer for \"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" returns successfully"
May 15 23:36:36.802405 containerd[1455]: time="2025-05-15T23:36:36.802241892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" id:\"4cd867610c9f24dfe0f0f56c2610b9831bfdb322615da5dfaae12e749713dd3b\" pid:4579 exited_at:{seconds:1747352196 nanos:801936531}"
May 15 23:36:37.045580 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 23:36:37.667255 kubelet[2547]: E0515 23:36:37.667192 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:36:37.685766 kubelet[2547]: I0515 23:36:37.685377 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6gtfw" podStartSLOduration=4.685359883 podStartE2EDuration="4.685359883s" podCreationTimestamp="2025-05-15 23:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:36:37.685026921 +0000 UTC m=+76.346640574" watchObservedRunningTime="2025-05-15 23:36:37.685359883 +0000 UTC m=+76.346973536"
May 15 23:36:39.427457 kubelet[2547]: E0515 23:36:39.424349 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:36:39.777624 containerd[1455]: time="2025-05-15T23:36:39.777439618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" id:\"2edd3b16fdd5664d221364cfc9238f97843e3249626c5547e7583dae10969b8b\" pid:5014 exit_status:1 exited_at:{seconds:1747352199 nanos:773833517}"
May 15 23:36:39.893777 systemd-networkd[1396]: lxc_health: Link UP
May 15 23:36:39.899940 systemd-networkd[1396]: lxc_health: Gained carrier
May 15 23:36:41.412715 systemd-networkd[1396]: lxc_health: Gained IPv6LL
May 15 23:36:41.424568 kubelet[2547]: E0515 23:36:41.424515 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:36:41.676785 kubelet[2547]: E0515 23:36:41.676667 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:36:41.899448 containerd[1455]: time="2025-05-15T23:36:41.899402007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" id:\"bb1454fe144033a99ebbaf6c3dc61c5d554f839d8fc1a0a2a7262422317225d6\" pid:5121 exited_at:{seconds:1747352201 nanos:898969044}"
May 15 23:36:42.679488 kubelet[2547]: E0515 23:36:42.679387 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:36:44.016805 containerd[1455]: time="2025-05-15T23:36:44.016751125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" id:\"f528f9c392c903b729c5f9c313c377dddbbbf06be1d089703f9815aa01cc7e9e\" pid:5153 exited_at:{seconds:1747352204 nanos:15991761}"
May 15 23:36:46.165217 containerd[1455]: time="2025-05-15T23:36:46.165171060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f60e337d2262025281f7101c4febc1ceb1de5a427cbc42af9ff38b279ef5e\" id:\"63e4147a78806586357946a9c4ddc5db746592eb93c9def29145693405f1bc68\" pid:5177 exited_at:{seconds:1747352206 nanos:164328896}"
May 15 23:36:46.171977 sshd[4314]: Connection closed by 10.0.0.1 port 55886
May 15 23:36:46.172366 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
May 15 23:36:46.176189 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:55886.service: Deactivated successfully.
May 15 23:36:46.179225 systemd[1]: session-25.scope: Deactivated successfully.
May 15 23:36:46.180194 systemd-logind[1436]: Session 25 logged out. Waiting for processes to exit.
May 15 23:36:46.181179 systemd-logind[1436]: Removed session 25.