May 13 10:00:21.799783 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 10:00:21.799803 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 08:41:27 -00 2025
May 13 10:00:21.799812 kernel: KASLR enabled
May 13 10:00:21.799818 kernel: efi: EFI v2.7 by EDK II
May 13 10:00:21.799823 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 10:00:21.799828 kernel: random: crng init done
May 13 10:00:21.799835 kernel: secureboot: Secure boot disabled
May 13 10:00:21.799841 kernel: ACPI: Early table checksum verification disabled
May 13 10:00:21.799857 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 10:00:21.799865 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 10:00:21.799871 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799877 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799882 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799888 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799895 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799902 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799908 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799914 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799920 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:00:21.799926 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 10:00:21.799932 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 10:00:21.799938 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:00:21.799944 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 10:00:21.799950 kernel: Zone ranges:
May 13 10:00:21.799956 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:00:21.799963 kernel: DMA32 empty
May 13 10:00:21.799969 kernel: Normal empty
May 13 10:00:21.799975 kernel: Device empty
May 13 10:00:21.799981 kernel: Movable zone start for each node
May 13 10:00:21.799986 kernel: Early memory node ranges
May 13 10:00:21.799993 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 10:00:21.799999 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 10:00:21.800005 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 10:00:21.800011 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 10:00:21.800017 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 10:00:21.800023 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 10:00:21.800029 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 10:00:21.800036 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 10:00:21.800042 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 10:00:21.800049 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 10:00:21.800057 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 10:00:21.800064 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 10:00:21.800070 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 10:00:21.800078 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:00:21.800084 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 10:00:21.800090 kernel: psci: probing for conduit method from ACPI.
May 13 10:00:21.800097 kernel: psci: PSCIv1.1 detected in firmware.
May 13 10:00:21.800103 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 10:00:21.800109 kernel: psci: Trusted OS migration not required
May 13 10:00:21.800115 kernel: psci: SMC Calling Convention v1.1
May 13 10:00:21.800122 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 10:00:21.800128 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 10:00:21.800135 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 10:00:21.800143 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 10:00:21.800149 kernel: Detected PIPT I-cache on CPU0
May 13 10:00:21.800155 kernel: CPU features: detected: GIC system register CPU interface
May 13 10:00:21.800162 kernel: CPU features: detected: Spectre-v4
May 13 10:00:21.800168 kernel: CPU features: detected: Spectre-BHB
May 13 10:00:21.800175 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 10:00:21.800181 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 10:00:21.800188 kernel: CPU features: detected: ARM erratum 1418040
May 13 10:00:21.800194 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 10:00:21.800200 kernel: alternatives: applying boot alternatives
May 13 10:00:21.800208 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3651514edeb4393ddaa415275e0af422804924552258e142c279f217f1c9042
May 13 10:00:21.800216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 10:00:21.800222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 10:00:21.800229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 10:00:21.800235 kernel: Fallback order for Node 0: 0
May 13 10:00:21.800241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 10:00:21.800248 kernel: Policy zone: DMA
May 13 10:00:21.800254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 10:00:21.800261 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 10:00:21.800267 kernel: software IO TLB: area num 4.
May 13 10:00:21.800273 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 10:00:21.800280 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 10:00:21.800286 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 10:00:21.800294 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 10:00:21.800301 kernel: rcu: RCU event tracing is enabled.
May 13 10:00:21.800308 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 10:00:21.800314 kernel: Trampoline variant of Tasks RCU enabled.
May 13 10:00:21.800321 kernel: Tracing variant of Tasks RCU enabled.
May 13 10:00:21.800327 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 10:00:21.800333 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 10:00:21.800340 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:00:21.800346 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:00:21.800353 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 10:00:21.800359 kernel: GICv3: 256 SPIs implemented
May 13 10:00:21.800367 kernel: GICv3: 0 Extended SPIs implemented
May 13 10:00:21.800373 kernel: Root IRQ handler: gic_handle_irq
May 13 10:00:21.800379 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 10:00:21.800385 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 10:00:21.800392 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 10:00:21.800398 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 10:00:21.800405 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 10:00:21.800411 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 10:00:21.800418 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 10:00:21.800424 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 10:00:21.800431 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 10:00:21.800437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:00:21.800445 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 10:00:21.800451 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 10:00:21.800458 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 10:00:21.800464 kernel: arm-pv: using stolen time PV
May 13 10:00:21.800471 kernel: Console: colour dummy device 80x25
May 13 10:00:21.800478 kernel: ACPI: Core revision 20240827
May 13 10:00:21.800484 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 10:00:21.800491 kernel: pid_max: default: 32768 minimum: 301
May 13 10:00:21.800497 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 10:00:21.800505 kernel: landlock: Up and running.
May 13 10:00:21.800529 kernel: SELinux: Initializing.
May 13 10:00:21.800537 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:00:21.800544 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:00:21.800550 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 10:00:21.800557 kernel: rcu: Hierarchical SRCU implementation.
May 13 10:00:21.800564 kernel: rcu: Max phase no-delay instances is 400.
May 13 10:00:21.800571 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 10:00:21.800577 kernel: Remapping and enabling EFI services.
May 13 10:00:21.800586 kernel: smp: Bringing up secondary CPUs ...
May 13 10:00:21.800597 kernel: Detected PIPT I-cache on CPU1
May 13 10:00:21.800604 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 10:00:21.800613 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 10:00:21.800619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:00:21.800626 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 10:00:21.800633 kernel: Detected PIPT I-cache on CPU2
May 13 10:00:21.800640 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 10:00:21.800647 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 10:00:21.800655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:00:21.800662 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 10:00:21.800669 kernel: Detected PIPT I-cache on CPU3
May 13 10:00:21.800676 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 10:00:21.800682 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 10:00:21.800689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:00:21.800696 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 10:00:21.800703 kernel: smp: Brought up 1 node, 4 CPUs
May 13 10:00:21.800710 kernel: SMP: Total of 4 processors activated.
May 13 10:00:21.800718 kernel: CPU: All CPU(s) started at EL1
May 13 10:00:21.800725 kernel: CPU features: detected: 32-bit EL0 Support
May 13 10:00:21.800731 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 10:00:21.800738 kernel: CPU features: detected: Common not Private translations
May 13 10:00:21.800745 kernel: CPU features: detected: CRC32 instructions
May 13 10:00:21.800752 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 10:00:21.800759 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 10:00:21.800766 kernel: CPU features: detected: LSE atomic instructions
May 13 10:00:21.800773 kernel: CPU features: detected: Privileged Access Never
May 13 10:00:21.800781 kernel: CPU features: detected: RAS Extension Support
May 13 10:00:21.800788 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 10:00:21.800794 kernel: alternatives: applying system-wide alternatives
May 13 10:00:21.800801 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 10:00:21.800809 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 10:00:21.800816 kernel: devtmpfs: initialized
May 13 10:00:21.800823 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 10:00:21.800830 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 10:00:21.800837 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 10:00:21.800851 kernel: 0 pages in range for non-PLT usage
May 13 10:00:21.800858 kernel: 508528 pages in range for PLT usage
May 13 10:00:21.800865 kernel: pinctrl core: initialized pinctrl subsystem
May 13 10:00:21.800872 kernel: SMBIOS 3.0.0 present.
May 13 10:00:21.800878 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 10:00:21.800885 kernel: DMI: Memory slots populated: 1/1
May 13 10:00:21.800892 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 10:00:21.800899 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 10:00:21.800906 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 10:00:21.800914 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 10:00:21.800921 kernel: audit: initializing netlink subsys (disabled)
May 13 10:00:21.800928 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
May 13 10:00:21.800935 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 10:00:21.800942 kernel: cpuidle: using governor menu
May 13 10:00:21.800949 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 10:00:21.800956 kernel: ASID allocator initialised with 32768 entries
May 13 10:00:21.800963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 10:00:21.800970 kernel: Serial: AMBA PL011 UART driver
May 13 10:00:21.800978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 10:00:21.800985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 10:00:21.800991 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 10:00:21.800998 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 10:00:21.801005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 10:00:21.801012 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 10:00:21.801019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 10:00:21.801026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 10:00:21.801033 kernel: ACPI: Added _OSI(Module Device)
May 13 10:00:21.801040 kernel: ACPI: Added _OSI(Processor Device)
May 13 10:00:21.801047 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 10:00:21.801054 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 10:00:21.801061 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 10:00:21.801068 kernel: ACPI: Interpreter enabled
May 13 10:00:21.801075 kernel: ACPI: Using GIC for interrupt routing
May 13 10:00:21.801081 kernel: ACPI: MCFG table detected, 1 entries
May 13 10:00:21.801088 kernel: ACPI: CPU0 has been hot-added
May 13 10:00:21.801095 kernel: ACPI: CPU1 has been hot-added
May 13 10:00:21.801103 kernel: ACPI: CPU2 has been hot-added
May 13 10:00:21.801109 kernel: ACPI: CPU3 has been hot-added
May 13 10:00:21.801116 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 10:00:21.801123 kernel: printk: legacy console [ttyAMA0] enabled
May 13 10:00:21.801130 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 10:00:21.801259 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 10:00:21.801324 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 10:00:21.801382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 10:00:21.801441 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 10:00:21.801497 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 10:00:21.801506 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 10:00:21.801573 kernel: PCI host bridge to bus 0000:00
May 13 10:00:21.801649 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 10:00:21.801707 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 10:00:21.801760 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 10:00:21.801814 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 10:00:21.801904 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 10:00:21.801976 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 10:00:21.802036 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 10:00:21.802095 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 10:00:21.802153 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 10:00:21.802211 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 10:00:21.802273 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 10:00:21.802331 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 10:00:21.802383 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 10:00:21.802434 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 10:00:21.802486 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 10:00:21.802494 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 10:00:21.802502 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 10:00:21.802523 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 10:00:21.802531 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 10:00:21.802538 kernel: iommu: Default domain type: Translated
May 13 10:00:21.802545 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 10:00:21.802552 kernel: efivars: Registered efivars operations
May 13 10:00:21.802559 kernel: vgaarb: loaded
May 13 10:00:21.802566 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 10:00:21.802573 kernel: VFS: Disk quotas dquot_6.6.0
May 13 10:00:21.802580 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 10:00:21.802589 kernel: pnp: PnP ACPI init
May 13 10:00:21.802664 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 10:00:21.802674 kernel: pnp: PnP ACPI: found 1 devices
May 13 10:00:21.802681 kernel: NET: Registered PF_INET protocol family
May 13 10:00:21.802688 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 10:00:21.802695 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 10:00:21.802702 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 10:00:21.802709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 10:00:21.802719 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 10:00:21.802726 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 10:00:21.802732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:00:21.802739 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:00:21.802746 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 10:00:21.802753 kernel: PCI: CLS 0 bytes, default 64
May 13 10:00:21.802760 kernel: kvm [1]: HYP mode not available
May 13 10:00:21.802767 kernel: Initialise system trusted keyrings
May 13 10:00:21.802774 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 10:00:21.802782 kernel: Key type asymmetric registered
May 13 10:00:21.802789 kernel: Asymmetric key parser 'x509' registered
May 13 10:00:21.802797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 10:00:21.802804 kernel: io scheduler mq-deadline registered
May 13 10:00:21.802811 kernel: io scheduler kyber registered
May 13 10:00:21.802818 kernel: io scheduler bfq registered
May 13 10:00:21.802825 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 10:00:21.802832 kernel: ACPI: button: Power Button [PWRB]
May 13 10:00:21.802839 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 10:00:21.802919 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 10:00:21.802929 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 10:00:21.802936 kernel: thunder_xcv, ver 1.0
May 13 10:00:21.802943 kernel: thunder_bgx, ver 1.0
May 13 10:00:21.802949 kernel: nicpf, ver 1.0
May 13 10:00:21.802956 kernel: nicvf, ver 1.0
May 13 10:00:21.803025 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 10:00:21.803081 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T10:00:21 UTC (1747130421)
May 13 10:00:21.803092 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 10:00:21.803099 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 10:00:21.803106 kernel: watchdog: NMI not fully supported
May 13 10:00:21.803113 kernel: watchdog: Hard watchdog permanently disabled
May 13 10:00:21.803120 kernel: NET: Registered PF_INET6 protocol family
May 13 10:00:21.803127 kernel: Segment Routing with IPv6
May 13 10:00:21.803134 kernel: In-situ OAM (IOAM) with IPv6
May 13 10:00:21.803141 kernel: NET: Registered PF_PACKET protocol family
May 13 10:00:21.803147 kernel: Key type dns_resolver registered
May 13 10:00:21.803155 kernel: registered taskstats version 1
May 13 10:00:21.803162 kernel: Loading compiled-in X.509 certificates
May 13 10:00:21.803169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d18e2d911aaed50d8aae6c7998623d31780af195'
May 13 10:00:21.803176 kernel: Demotion targets for Node 0: null
May 13 10:00:21.803183 kernel: Key type .fscrypt registered
May 13 10:00:21.803190 kernel: Key type fscrypt-provisioning registered
May 13 10:00:21.803197 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 10:00:21.803203 kernel: ima: Allocated hash algorithm: sha1
May 13 10:00:21.803211 kernel: ima: No architecture policies found
May 13 10:00:21.803219 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 10:00:21.803226 kernel: clk: Disabling unused clocks
May 13 10:00:21.803232 kernel: PM: genpd: Disabling unused power domains
May 13 10:00:21.803239 kernel: Warning: unable to open an initial console.
May 13 10:00:21.803247 kernel: Freeing unused kernel memory: 39488K
May 13 10:00:21.803253 kernel: Run /init as init process
May 13 10:00:21.803260 kernel: with arguments:
May 13 10:00:21.803267 kernel: /init
May 13 10:00:21.803274 kernel: with environment:
May 13 10:00:21.803281 kernel: HOME=/
May 13 10:00:21.803288 kernel: TERM=linux
May 13 10:00:21.803295 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 10:00:21.803303 systemd[1]: Successfully made /usr/ read-only.
May 13 10:00:21.803312 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:00:21.803321 systemd[1]: Detected virtualization kvm.
May 13 10:00:21.803328 systemd[1]: Detected architecture arm64.
May 13 10:00:21.803336 systemd[1]: Running in initrd.
May 13 10:00:21.803343 systemd[1]: No hostname configured, using default hostname.
May 13 10:00:21.803351 systemd[1]: Hostname set to .
May 13 10:00:21.803359 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:00:21.803366 systemd[1]: Queued start job for default target initrd.target.
May 13 10:00:21.803374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:00:21.803381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:00:21.803389 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 10:00:21.803398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:00:21.803406 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 10:00:21.803414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 10:00:21.803423 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 10:00:21.803430 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 10:00:21.803438 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:00:21.803446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:00:21.803454 systemd[1]: Reached target paths.target - Path Units.
May 13 10:00:21.803462 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:00:21.803469 systemd[1]: Reached target swap.target - Swaps.
May 13 10:00:21.803476 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:00:21.803484 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:00:21.803491 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:00:21.803499 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 10:00:21.803506 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 10:00:21.803534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:00:21.803544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:00:21.803552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:00:21.803559 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:00:21.803567 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 10:00:21.803574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:00:21.803582 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 10:00:21.803590 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 10:00:21.803597 systemd[1]: Starting systemd-fsck-usr.service...
May 13 10:00:21.803606 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:00:21.803613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:00:21.803621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:00:21.803628 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:00:21.803636 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 10:00:21.803645 systemd[1]: Finished systemd-fsck-usr.service.
May 13 10:00:21.803652 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:00:21.803677 systemd-journald[245]: Collecting audit messages is disabled.
May 13 10:00:21.803702 systemd-journald[245]: Journal started
May 13 10:00:21.803721 systemd-journald[245]: Runtime Journal (/run/log/journal/c4ec27b38a2c4a4fb864511c1f92d41b) is 6M, max 48.5M, 42.4M free.
May 13 10:00:21.794694 systemd-modules-load[246]: Inserted module 'overlay'
May 13 10:00:21.805134 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:00:21.808637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:00:21.811655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 10:00:21.811107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:00:21.815080 systemd-modules-load[246]: Inserted module 'br_netfilter'
May 13 10:00:21.815757 kernel: Bridge firewalling registered
May 13 10:00:21.815269 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 10:00:21.817306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:00:21.819126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:00:21.827665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:00:21.829929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:00:21.833712 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 10:00:21.835499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:00:21.836578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:00:21.842560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:00:21.844520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:00:21.845364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:00:21.847506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 10:00:21.869598 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3651514edeb4393ddaa415275e0af422804924552258e142c279f217f1c9042
May 13 10:00:21.884374 systemd-resolved[287]: Positive Trust Anchors:
May 13 10:00:21.884393 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:00:21.884426 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:00:21.889180 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 13 10:00:21.890173 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:00:21.894250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:00:21.950542 kernel: SCSI subsystem initialized
May 13 10:00:21.954531 kernel: Loading iSCSI transport class v2.0-870.
May 13 10:00:21.963528 kernel: iscsi: registered transport (tcp)
May 13 10:00:21.978559 kernel: iscsi: registered transport (qla4xxx)
May 13 10:00:21.978600 kernel: QLogic iSCSI HBA Driver
May 13 10:00:21.995838 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:00:22.010541 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:00:22.011714 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:00:22.055791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 10:00:22.057691 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 10:00:22.117535 kernel: raid6: neonx8 gen() 15773 MB/s
May 13 10:00:22.134532 kernel: raid6: neonx4 gen() 15729 MB/s
May 13 10:00:22.151522 kernel: raid6: neonx2 gen() 13164 MB/s
May 13 10:00:22.168535 kernel: raid6: neonx1 gen() 10516 MB/s
May 13 10:00:22.185534 kernel: raid6: int64x8 gen() 6881 MB/s
May 13 10:00:22.202534 kernel: raid6: int64x4 gen() 7338 MB/s
May 13 10:00:22.219523 kernel: raid6: int64x2 gen() 6087 MB/s
May 13 10:00:22.236527 kernel: raid6: int64x1 gen() 5030 MB/s
May 13 10:00:22.236542 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
May 13 10:00:22.253535 kernel: raid6: .... xor() 12037 MB/s, rmw enabled
May 13 10:00:22.253551 kernel: raid6: using neon recovery algorithm
May 13 10:00:22.258817 kernel: xor: measuring software checksum speed
May 13 10:00:22.258859 kernel: 8regs : 20770 MB/sec
May 13 10:00:22.258880 kernel: 32regs : 21693 MB/sec
May 13 10:00:22.259747 kernel: arm64_neon : 27105 MB/sec
May 13 10:00:22.259760 kernel: xor: using function: arm64_neon (27105 MB/sec)
May 13 10:00:22.310533 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 10:00:22.317158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:00:22.319263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:00:22.347231 systemd-udevd[497]: Using default interface naming scheme 'v255'.
May 13 10:00:22.351432 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:00:22.353538 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 10:00:22.376595 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
May 13 10:00:22.397255 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:00:22.399404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:00:22.451628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:00:22.454102 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 10:00:22.498094 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 10:00:22.498278 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 10:00:22.501822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 10:00:22.501866 kernel: GPT:9289727 != 19775487
May 13 10:00:22.501879 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 10:00:22.501888 kernel: GPT:9289727 != 19775487
May 13 10:00:22.502531 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 10:00:22.505294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:00:22.511459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:00:22.505407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:00:22.511466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:00:22.513819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:00:22.540587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 10:00:22.541671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:00:22.549567 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 10:00:22.557490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 10:00:22.567505 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 10:00:22.568381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 10:00:22.577211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:00:22.578146 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:00:22.579894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:00:22.581664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:00:22.584058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 10:00:22.585795 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 10:00:22.606273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:00:22.616704 disk-uuid[591]: Primary Header is updated.
May 13 10:00:22.616704 disk-uuid[591]: Secondary Entries is updated.
May 13 10:00:22.616704 disk-uuid[591]: Secondary Header is updated.
May 13 10:00:22.620039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:00:23.627804 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:00:23.630262 disk-uuid[599]: The operation has completed successfully.
May 13 10:00:23.631558 kernel: block device autoloading is deprecated and will be removed.
May 13 10:00:23.658804 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 10:00:23.658913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 10:00:23.682569 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 10:00:23.704485 sh[613]: Success
May 13 10:00:23.718804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 10:00:23.718842 kernel: device-mapper: uevent: version 1.0.3
May 13 10:00:23.718853 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 10:00:23.730540 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 10:00:23.756678 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 10:00:23.758969 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 10:00:23.772670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 10:00:23.777996 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 10:00:23.778035 kernel: BTRFS: device fsid a7f3e58b-f7f0-457e-beaa-7636cc7d4568 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (625)
May 13 10:00:23.779024 kernel: BTRFS info (device dm-0): first mount of filesystem a7f3e58b-f7f0-457e-beaa-7636cc7d4568
May 13 10:00:23.779052 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 10:00:23.780521 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 10:00:23.784894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 10:00:23.785841 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:00:23.786870 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 10:00:23.787496 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 10:00:23.788738 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 10:00:23.810528 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656)
May 13 10:00:23.812082 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:00:23.812116 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:00:23.812126 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:00:23.818528 kernel: BTRFS info (device vda6): last unmount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:00:23.818976 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 10:00:23.821428 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 10:00:23.889492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:00:23.892087 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:00:23.930947 systemd-networkd[803]: lo: Link UP
May 13 10:00:23.930959 systemd-networkd[803]: lo: Gained carrier
May 13 10:00:23.931650 systemd-networkd[803]: Enumeration completed
May 13 10:00:23.931793 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:00:23.932981 systemd[1]: Reached target network.target - Network.
May 13 10:00:23.933805 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:00:23.933809 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:00:23.934191 systemd-networkd[803]: eth0: Link UP
May 13 10:00:23.934194 systemd-networkd[803]: eth0: Gained carrier
May 13 10:00:23.934201 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:00:23.949569 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:00:23.957701 ignition[699]: Ignition 2.21.0
May 13 10:00:23.957714 ignition[699]: Stage: fetch-offline
May 13 10:00:23.957748 ignition[699]: no configs at "/usr/lib/ignition/base.d"
May 13 10:00:23.957755 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:00:23.957959 ignition[699]: parsed url from cmdline: ""
May 13 10:00:23.957963 ignition[699]: no config URL provided
May 13 10:00:23.957967 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
May 13 10:00:23.957973 ignition[699]: no config at "/usr/lib/ignition/user.ign"
May 13 10:00:23.957990 ignition[699]: op(1): [started] loading QEMU firmware config module
May 13 10:00:23.957994 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 10:00:23.972577 ignition[699]: op(1): [finished] loading QEMU firmware config module
May 13 10:00:23.972601 ignition[699]: QEMU firmware config was not found. Ignoring...
May 13 10:00:24.009713 ignition[699]: parsing config with SHA512: 5f9ae50cba8e49259b15546b21444fa2337b80ac6ab71de70ce20d1c9eabbaf9767b386ce7462708f8acca6f2d94a00891aac7d6611db4074f7be1eb8172c453
May 13 10:00:24.015767 unknown[699]: fetched base config from "system"
May 13 10:00:24.015777 unknown[699]: fetched user config from "qemu"
May 13 10:00:24.016363 ignition[699]: fetch-offline: fetch-offline passed
May 13 10:00:24.016426 ignition[699]: Ignition finished successfully
May 13 10:00:24.018240 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:00:24.019809 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 10:00:24.020548 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 10:00:24.051661 ignition[816]: Ignition 2.21.0
May 13 10:00:24.051675 ignition[816]: Stage: kargs
May 13 10:00:24.051846 ignition[816]: no configs at "/usr/lib/ignition/base.d"
May 13 10:00:24.051860 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:00:24.053178 ignition[816]: kargs: kargs passed
May 13 10:00:24.053232 ignition[816]: Ignition finished successfully
May 13 10:00:24.055700 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 10:00:24.057368 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 10:00:24.081678 ignition[824]: Ignition 2.21.0
May 13 10:00:24.081695 ignition[824]: Stage: disks
May 13 10:00:24.081846 ignition[824]: no configs at "/usr/lib/ignition/base.d"
May 13 10:00:24.081855 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:00:24.084504 ignition[824]: disks: disks passed
May 13 10:00:24.084573 ignition[824]: Ignition finished successfully
May 13 10:00:24.086885 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 10:00:24.087723 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 10:00:24.088930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 10:00:24.090465 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:00:24.091895 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:00:24.093160 systemd[1]: Reached target basic.target - Basic System.
May 13 10:00:24.095266 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 10:00:24.126241 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 10:00:24.130080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 10:00:24.131856 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 10:00:24.213541 kernel: EXT4-fs (vda9): mounted filesystem 70c9b161-a0a5-4b0a-87a4-ca4044b4e9ba r/w with ordered data mode. Quota mode: none.
May 13 10:00:24.213573 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 10:00:24.214671 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 10:00:24.218306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:00:24.219711 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 10:00:24.220453 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 10:00:24.220493 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 10:00:24.220533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:00:24.237880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 10:00:24.240106 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 10:00:24.242528 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (843)
May 13 10:00:24.243531 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:00:24.243545 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:00:24.243555 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:00:24.246081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:00:24.288734 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
May 13 10:00:24.291922 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
May 13 10:00:24.295375 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
May 13 10:00:24.299048 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 10:00:24.385122 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 10:00:24.387248 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 10:00:24.389287 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 10:00:24.402535 kernel: BTRFS info (device vda6): last unmount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:00:24.421647 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 10:00:24.433320 ignition[958]: INFO : Ignition 2.21.0
May 13 10:00:24.433320 ignition[958]: INFO : Stage: mount
May 13 10:00:24.435451 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:00:24.435451 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:00:24.435451 ignition[958]: INFO : mount: mount passed
May 13 10:00:24.435451 ignition[958]: INFO : Ignition finished successfully
May 13 10:00:24.436945 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 10:00:24.439616 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 10:00:24.905122 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 10:00:24.906701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:00:24.931940 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (969)
May 13 10:00:24.931985 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:00:24.931995 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:00:24.932586 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:00:24.935456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:00:24.964068 ignition[986]: INFO : Ignition 2.21.0 May 13 10:00:24.965173 ignition[986]: INFO : Stage: files May 13 10:00:24.965921 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 10:00:24.965921 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 10:00:24.967499 ignition[986]: DEBUG : files: compiled without relabeling support, skipping May 13 10:00:24.968498 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 10:00:24.968498 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 10:00:24.970957 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 10:00:24.971922 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 10:00:24.971922 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 10:00:24.971469 unknown[986]: wrote ssh authorized keys file for user: core May 13 10:00:24.974652 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 13 10:00:24.976096 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 13 10:00:25.138179 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 10:00:25.512865 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 13 10:00:25.514363 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 10:00:25.514363 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 13 10:00:25.733664 systemd-networkd[803]: eth0: Gained IPv6LL May 13 10:00:25.892304 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 10:00:26.035186 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 10:00:26.035186 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 10:00:26.038318 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 10:00:26.047136 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 13 10:00:26.322138 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 10:00:26.563027 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 13 10:00:26.563027 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 10:00:26.566285 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 13 10:00:26.584454 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 10:00:26.588115 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 10:00:26.590372 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 13 10:00:26.590372 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 13 10:00:26.590372 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 13 10:00:26.590372 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 10:00:26.590372 
ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 10:00:26.590372 ignition[986]: INFO : files: files passed May 13 10:00:26.590372 ignition[986]: INFO : Ignition finished successfully May 13 10:00:26.591119 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 10:00:26.593442 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 10:00:26.596655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 10:00:26.610218 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 10:00:26.610332 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 10:00:26.612845 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory May 13 10:00:26.614662 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 10:00:26.614662 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 10:00:26.617601 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 10:00:26.617174 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 10:00:26.618893 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 10:00:26.622038 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 10:00:26.656651 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 10:00:26.656770 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 10:00:26.658430 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 10:00:26.659870 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 10:00:26.661370 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 10:00:26.662162 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 10:00:26.693841 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 10:00:26.695994 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 10:00:26.714276 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 10:00:26.715325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 10:00:26.717110 systemd[1]: Stopped target timers.target - Timer Units. May 13 10:00:26.718637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 10:00:26.718756 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 10:00:26.720990 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 10:00:26.722682 systemd[1]: Stopped target basic.target - Basic System. May 13 10:00:26.724104 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 10:00:26.725537 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 10:00:26.727225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 10:00:26.728902 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
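[Editor's note] The Ignition "files" stage above (ops(1)-(13)) is driven by a declarative config supplied at boot. A minimal sketch of the kind of Ignition v3 config that could produce this op sequence, with paths and URLs taken from the log; the SSH key and unit body were not logged and are placeholders, and the exact spec version is an assumption:

```python
# Illustrative reconstruction only -- NOT the actual config this machine booted with.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version; Ignition 2.21.0 accepts 3.x
    "passwd": {
        # op(1)/op(2): create user "core" and install its SSH keys
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}]
    },
    "storage": {
        # op(3), op(4), op(b): files fetched with GET, as logged
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw"}},
        ],
        # op(a): activate the sysext by symlinking it into /etc/extensions
        "links": [{"path": "/etc/extensions/kubernetes.raw",
                   "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"}],
    },
    "systemd": {
        "units": [
            # op(c)/op(12): unit written and preset enabled; body elided in the log
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."},
            # op(10)/op(11): preset disabled, enablement symlinks removed
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}
print(json.dumps(config, indent=2))
```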
May 13 10:00:26.730505 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 10:00:26.732149 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 10:00:26.733804 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 10:00:26.735484 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 10:00:26.736991 systemd[1]: Stopped target swap.target - Swaps. May 13 10:00:26.738325 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 10:00:26.738446 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 10:00:26.740521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 10:00:26.742244 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 10:00:26.744042 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 10:00:26.747601 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 10:00:26.748541 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 10:00:26.748662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 10:00:26.751241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 10:00:26.751355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 10:00:26.753069 systemd[1]: Stopped target paths.target - Path Units. May 13 10:00:26.754460 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 10:00:26.759588 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 10:00:26.760641 systemd[1]: Stopped target slices.target - Slice Units. May 13 10:00:26.762577 systemd[1]: Stopped target sockets.target - Socket Units. May 13 10:00:26.764055 systemd[1]: iscsid.socket: Deactivated successfully. May 13 10:00:26.764142 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 10:00:26.765547 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 10:00:26.765625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 10:00:26.767043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 10:00:26.767159 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 10:00:26.768690 systemd[1]: ignition-files.service: Deactivated successfully. May 13 10:00:26.768793 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 10:00:26.771676 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 10:00:26.772776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 10:00:26.772902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 10:00:26.790061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 10:00:26.790729 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 10:00:26.790855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 10:00:26.792419 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 10:00:26.792539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 10:00:26.798945 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 10:00:26.799042 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 13 10:00:26.802737 ignition[1041]: INFO : Ignition 2.21.0 May 13 10:00:26.802737 ignition[1041]: INFO : Stage: umount May 13 10:00:26.805698 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 10:00:26.805698 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 10:00:26.805698 ignition[1041]: INFO : umount: umount passed May 13 10:00:26.805698 ignition[1041]: INFO : Ignition finished successfully May 13 10:00:26.804916 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 10:00:26.809439 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 10:00:26.809589 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 10:00:26.811401 systemd[1]: Stopped target network.target - Network. May 13 10:00:26.812616 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 10:00:26.812673 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 10:00:26.814183 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 10:00:26.814225 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 10:00:26.815577 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 10:00:26.815625 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 10:00:26.817000 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 10:00:26.817038 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 10:00:26.818568 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 10:00:26.820235 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 10:00:26.825323 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 10:00:26.825422 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 10:00:26.829163 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 10:00:26.829369 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 10:00:26.829449 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 10:00:26.831974 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 10:00:26.832041 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 10:00:26.832963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 10:00:26.833005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 10:00:26.836025 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 10:00:26.837382 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 10:00:26.837496 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 10:00:26.839988 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 10:00:26.840169 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 13 10:00:26.841404 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 10:00:26.841439 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 10:00:26.844661 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 10:00:26.845781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 10:00:26.845836 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 13 10:00:26.847688 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 10:00:26.847727 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 10:00:26.850259 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 10:00:26.850300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 10:00:26.852142 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 10:00:26.855783 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 10:00:26.868011 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 10:00:26.868158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 10:00:26.870255 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 10:00:26.870318 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 10:00:26.871850 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 10:00:26.871937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 10:00:26.873409 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 10:00:26.873457 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 10:00:26.875903 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 10:00:26.875947 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 10:00:26.878463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 10:00:26.878532 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 10:00:26.882759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 10:00:26.883781 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 13 10:00:26.883860 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 13 10:00:26.886809 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 10:00:26.886858 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 10:00:26.890630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 10:00:26.890671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 10:00:26.893950 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 10:00:26.898693 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 10:00:26.903625 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 10:00:26.903730 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 10:00:26.906085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 10:00:26.908347 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 10:00:26.926114 systemd[1]: Switching root. May 13 10:00:26.955700 systemd-journald[245]: Journal stopped May 13 10:00:27.705134 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
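[Editor's note] The `\x2d` sequences in unit names above (e.g. `run-credentials-systemd\x2dresolved.service.mount`) are systemd's unit-name escaping: "-" is the path separator inside mount unit names, so a literal "-" in a path component is hex-escaped. A simplified sketch of the rule, covering only the common cases and not a full `systemd-escape` replacement:

```python
# Simplified systemd unit-name path escaping, enough to explain
# "run-credentials-systemd\x2dresolved.service.mount" in the log above.
def escape_component(s: str) -> str:
    out = []
    for i, ch in enumerate(s):
        # keep alphanumerics, ':', '_' and non-leading '.'; escape the rest
        if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # '-' -> \x2d
    return "".join(out)

def path_to_mount_unit(path: str) -> str:
    comps = [escape_component(c) for c in path.strip("/").split("/")]
    return "-".join(comps) + ".mount"  # "/" becomes "-"

# -> run-credentials-systemd\x2dresolved.service.mount
print(path_to_mount_unit("/run/credentials/systemd-resolved.service"))
```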
May 13 10:00:27.705193 kernel: SELinux: policy capability network_peer_controls=1 May 13 10:00:27.705208 kernel: SELinux: policy capability open_perms=1 May 13 10:00:27.705220 kernel: SELinux: policy capability extended_socket_class=1 May 13 10:00:27.705231 kernel: SELinux: policy capability always_check_network=0 May 13 10:00:27.705240 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 10:00:27.705249 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 10:00:27.705258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 10:00:27.705267 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 10:00:27.705277 kernel: SELinux: policy capability userspace_initial_context=0 May 13 10:00:27.705286 kernel: audit: type=1403 audit(1747130427.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 10:00:27.705300 systemd[1]: Successfully loaded SELinux policy in 44.214ms. May 13 10:00:27.705319 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.563ms. May 13 10:00:27.705332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 10:00:27.705346 systemd[1]: Detected virtualization kvm. May 13 10:00:27.705356 systemd[1]: Detected architecture arm64. May 13 10:00:27.705366 systemd[1]: Detected first boot. May 13 10:00:27.705376 systemd[1]: Initializing machine ID from VM UUID. May 13 10:00:27.705395 kernel: NET: Registered PF_VSOCK protocol family May 13 10:00:27.705407 zram_generator::config[1089]: No configuration found. May 13 10:00:27.705418 systemd[1]: Populated /etc with preset unit settings. May 13 10:00:27.705431 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 10:00:27.705441 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 10:00:27.705451 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 10:00:27.705462 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 10:00:27.705472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 10:00:27.705482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 10:00:27.705495 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 10:00:27.705505 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 10:00:27.705527 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 10:00:27.705552 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 10:00:27.705563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 10:00:27.705573 systemd[1]: Created slice user.slice - User and Session Slice. May 13 10:00:27.705584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 10:00:27.705595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 10:00:27.705606 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
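[Editor's note] The long `(+PAM +AUDIT +SELINUX -APPARMOR ...)` list in the "systemd 256.8 running" line above is systemd's compile-time feature set: "+" means built in, "-" means compiled out. A tiny parser over an abbreviated copy of that string:

```python
# Parse systemd's compile-time feature list ('+'=enabled, '-'=disabled).
# The string below is an abbreviated excerpt of the one logged above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
            "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")
enabled = {f[1:] for f in features.split() if f.startswith("+")}
disabled = {f[1:] for f in features.split() if f.startswith("-")}
# Consistent with the SELinux policy-load messages above:
assert "SELINUX" in enabled and "APPARMOR" in disabled
```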
May 13 10:00:27.705616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 10:00:27.705627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 10:00:27.705638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 10:00:27.705649 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 10:00:27.705659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 10:00:27.705669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 10:00:27.705680 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 10:00:27.705690 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 10:00:27.705699 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 10:00:27.705710 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 10:00:27.705721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 10:00:27.705732 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 10:00:27.705742 systemd[1]: Reached target slices.target - Slice Units. May 13 10:00:27.705752 systemd[1]: Reached target swap.target - Swaps. May 13 10:00:27.705763 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 10:00:27.705773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 10:00:27.705784 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 10:00:27.705794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 10:00:27.705805 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 10:00:27.705824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 10:00:27.705838 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 10:00:27.705851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 10:00:27.705862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 10:00:27.705874 systemd[1]: Mounting media.mount - External Media Directory... May 13 10:00:27.705884 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 10:00:27.705894 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 10:00:27.705903 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 10:00:27.705914 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 10:00:27.705925 systemd[1]: Reached target machines.target - Containers. May 13 10:00:27.705935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 10:00:27.705945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:00:27.705955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 10:00:27.705965 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 10:00:27.705976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 13 10:00:27.705985 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 10:00:27.705995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:00:27.706005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 10:00:27.706017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:00:27.706027 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 10:00:27.706037 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 10:00:27.706049 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 10:00:27.706059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 10:00:27.706069 systemd[1]: Stopped systemd-fsck-usr.service. May 13 10:00:27.706080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:00:27.706091 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 10:00:27.706102 kernel: loop: module loaded May 13 10:00:27.706112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 10:00:27.706121 kernel: fuse: init (API version 7.41) May 13 10:00:27.706130 kernel: ACPI: bus type drm_connector registered May 13 10:00:27.706140 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 10:00:27.706150 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 10:00:27.706160 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 10:00:27.706171 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 10:00:27.706181 systemd[1]: verity-setup.service: Deactivated successfully. May 13 10:00:27.706193 systemd[1]: Stopped verity-setup.service. May 13 10:00:27.706203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 10:00:27.706213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 10:00:27.706223 systemd[1]: Mounted media.mount - External Media Directory. May 13 10:00:27.706233 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 10:00:27.706244 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 10:00:27.706254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 10:00:27.706264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 10:00:27.706274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 10:00:27.706309 systemd-journald[1157]: Collecting audit messages is disabled. May 13 10:00:27.706333 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 10:00:27.706345 systemd-journald[1157]: Journal started May 13 10:00:27.706366 systemd-journald[1157]: Runtime Journal (/run/log/journal/c4ec27b38a2c4a4fb864511c1f92d41b) is 6M, max 48.5M, 42.4M free. May 13 10:00:27.509757 systemd[1]: Queued start job for default target multi-user.target. May 13 10:00:27.518484 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
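[Editor's note] The journald startup line above sizes the volatile journal under /run: 6M in use against a 48.5M cap, with the "free" figure being the remainder after journald's rounding. The arithmetic, for the record:

```python
# Figures from the "Runtime Journal (/run/log/journal/...)" line above (MiB).
used, cap = 6.0, 48.5
print(f"{cap - used:.1f}M free")  # 42.5M; reported as 42.4M after rounding
```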
May 13 10:00:27.518864 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 10:00:27.708539 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 10:00:27.710538 systemd[1]: Started systemd-journald.service - Journal Service. May 13 10:00:27.710884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:00:27.711039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:00:27.712118 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 10:00:27.712262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 10:00:27.714801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:00:27.714978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:00:27.716171 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 10:00:27.716328 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 10:00:27.717399 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:00:27.717571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:00:27.718624 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 10:00:27.719704 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 10:00:27.721002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 10:00:27.722168 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 10:00:27.734093 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 10:00:27.736183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 10:00:27.737986 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 10:00:27.738892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 10:00:27.738919 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 10:00:27.740532 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 10:00:27.747271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 10:00:27.748191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:00:27.749475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 10:00:27.751373 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 10:00:27.752464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 10:00:27.753566 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 10:00:27.754389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 10:00:27.755543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 10:00:27.758681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 10:00:27.761666 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 13 10:00:27.763991 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 10:00:27.766413 systemd-journald[1157]: Time spent on flushing to /var/log/journal/c4ec27b38a2c4a4fb864511c1f92d41b is 32.269ms for 890 entries. May 13 10:00:27.766413 systemd-journald[1157]: System Journal (/var/log/journal/c4ec27b38a2c4a4fb864511c1f92d41b) is 8M, max 195.6M, 187.6M free. May 13 10:00:27.811496 systemd-journald[1157]: Received client request to flush runtime journal. May 13 10:00:27.811583 kernel: loop0: detected capacity change from 0 to 138376 May 13 10:00:27.811607 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 10:00:27.767612 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 10:00:27.769459 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 10:00:27.772074 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 10:00:27.777876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 10:00:27.782181 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 10:00:27.797085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 10:00:27.812431 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 10:00:27.814009 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 10:00:27.818072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 10:00:27.820083 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 10:00:27.826535 kernel: loop1: detected capacity change from 0 to 201592 May 13 10:00:27.849582 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 13 10:00:27.849600 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 13 10:00:27.854537 kernel: loop2: detected capacity change from 0 to 107312 May 13 10:00:27.856202 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 10:00:27.888031 kernel: loop3: detected capacity change from 0 to 138376 May 13 10:00:27.894551 kernel: loop4: detected capacity change from 0 to 201592 May 13 10:00:27.899533 kernel: loop5: detected capacity change from 0 to 107312 May 13 10:00:27.902256 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 10:00:27.902665 (sd-merge)[1227]: Merged extensions into '/usr'. May 13 10:00:27.907635 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)... May 13 10:00:27.907648 systemd[1]: Reloading... May 13 10:00:27.974317 zram_generator::config[1253]: No configuration found. May 13 10:00:28.019271 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 10:00:28.044787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:00:28.107379 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 10:00:28.107692 systemd[1]: Reloading finished in 199 ms. May 13 10:00:28.147304 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
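[Editor's note] The `(sd-merge)` lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images (the loop0-loop5 devices) onto /usr, which is what triggers the daemon reload that follows. A sketch of the discovery step, with search paths per the systemd-sysext documentation (illustration, not the actual implementation):

```python
# systemd-sysext discovers extension images (*.raw) or directories in its
# search paths and overlay-mounts their /usr trees onto the host /usr.
from pathlib import Path

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discovered_extensions() -> list[Path]:
    found = []
    for base in map(Path, SEARCH_PATHS):
        if base.is_dir():
            for entry in sorted(base.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry)
    return found

# On the machine above this would include /etc/extensions/kubernetes.raw,
# the symlink written by Ignition op(a), alongside the Flatcar-shipped sysexts.
print(discovered_extensions())
```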
May 13 10:00:28.148617 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 10:00:28.164833 systemd[1]: Starting ensure-sysext.service... May 13 10:00:28.166474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 10:00:28.181128 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)... May 13 10:00:28.181141 systemd[1]: Reloading... May 13 10:00:28.182815 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 13 10:00:28.182977 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 13 10:00:28.183324 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 10:00:28.183541 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 10:00:28.184141 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 10:00:28.184341 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. May 13 10:00:28.184387 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. May 13 10:00:28.186820 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. May 13 10:00:28.186831 systemd-tmpfiles[1288]: Skipping /boot May 13 10:00:28.195200 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. May 13 10:00:28.195217 systemd-tmpfiles[1288]: Skipping /boot May 13 10:00:28.219530 zram_generator::config[1315]: No configuration found. May 13 10:00:28.287816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:00:28.350816 systemd[1]: Reloading finished in 169 ms. May 13 10:00:28.370005 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 10:00:28.375744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 10:00:28.390605 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 10:00:28.392478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 10:00:28.394312 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 10:00:28.396575 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 10:00:28.401730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 10:00:28.404694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 10:00:28.414075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:00:28.418740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:00:28.421155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:00:28.423263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:00:28.424357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
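[Editor's note] The "Duplicate line for path ..., ignoring" warnings above reflect tmpfiles.d semantics: configuration is line-oriented ("Type Path Mode User Group Age Argument"), and when two lines claim the same path, the first one in precedence order wins and later ones are ignored. A simplified first-wins sketch (the real precedence also layers /etc/tmpfiles.d over /run over /usr/lib and sorts by file name):

```python
# First-wins resolution of duplicate tmpfiles.d paths, as warned above.
def resolve(lines: list[str]) -> list[str]:
    seen: dict[str, str] = {}
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] not in seen:
            seen[parts[1]] = line  # keep the first line for each path
    return list(seen.values())

print(resolve([
    "d /var/lib/systemd 0755 root root -",
    "d /var/lib/systemd 0700 root root -",   # duplicate path: ignored
]))
```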
May 13 10:00:28.424467 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:00:28.429437 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 10:00:28.436764 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 10:00:28.438611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:00:28.438819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:00:28.440665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:00:28.440843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:00:28.442206 systemd-udevd[1356]: Using default interface naming scheme 'v255'. May 13 10:00:28.443090 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:00:28.443278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:00:28.453678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:00:28.455328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 10:00:28.458684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 10:00:28.460747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 10:00:28.461704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:00:28.461859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:00:28.463783 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 10:00:28.465326 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 10:00:28.466783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 10:00:28.468307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 10:00:28.473157 augenrules[1395]: No rules May 13 10:00:28.477442 systemd[1]: audit-rules.service: Deactivated successfully. May 13 10:00:28.477693 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 10:00:28.482451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 10:00:28.482636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 10:00:28.483825 systemd[1]: Finished ensure-sysext.service. May 13 10:00:28.484661 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 10:00:28.484798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 10:00:28.489663 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 10:00:28.490783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 10:00:28.490989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 10:00:28.492112 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 13 10:00:28.496356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 10:00:28.499675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 10:00:28.500493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 10:00:28.500559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 10:00:28.502248 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 10:00:28.503090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 10:00:28.503175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 10:00:28.505771 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 10:00:28.507528 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 10:00:28.548697 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 10:00:28.548906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 10:00:28.550982 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 10:00:28.597604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 10:00:28.601723 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 10:00:28.630156 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 10:00:28.657290 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 10:00:28.658372 systemd[1]: Reached target time-set.target - System Time Set. May 13 10:00:28.664816 systemd-resolved[1354]: Positive Trust Anchors: May 13 10:00:28.664835 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 10:00:28.664867 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 10:00:28.671084 systemd-resolved[1354]: Defaulting to hostname 'linux'. May 13 10:00:28.671599 systemd-networkd[1434]: lo: Link UP May 13 10:00:28.671603 systemd-networkd[1434]: lo: Gained carrier May 13 10:00:28.672364 systemd-networkd[1434]: Enumeration completed May 13 10:00:28.672368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
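[Editor's note] The positive trust anchor logged by systemd-resolved above is the DNS root zone's DS record. Per RFC 4034, its RDATA fields are key tag, algorithm, digest type, and digest; here key tag 20326 with algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256):

```python
# Split the DS record from the resolved log into its RFC 4034 fields.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
name, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert (rtype, algorithm, digest_type) == ("DS", "8", "2")
print(f"key tag {key_tag}, SHA-256 digest of the root key: {digest}")
```

The negative trust anchors that follow it are the standard private/reserved namespaces (RFC 1918 reverse zones, home.arpa, .local, etc.) for which DNSSEC validation is skipped.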
May 13 10:00:28.672773 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 10:00:28.672781 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 10:00:28.673236 systemd-networkd[1434]: eth0: Link UP May 13 10:00:28.673357 systemd-networkd[1434]: eth0: Gained carrier May 13 10:00:28.673376 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 10:00:28.673613 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 10:00:28.674547 systemd[1]: Reached target network.target - Network. May 13 10:00:28.675239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 10:00:28.677691 systemd[1]: Reached target sysinit.target - System Initialization. May 13 10:00:28.678630 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 10:00:28.679704 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 10:00:28.680911 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 10:00:28.682025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 10:00:28.683121 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 10:00:28.684029 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 10:00:28.684061 systemd[1]: Reached target paths.target - Path Units. May 13 10:00:28.684821 systemd[1]: Reached target timers.target - Timer Units. May 13 10:00:28.687582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 10:00:28.689799 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 10:00:28.690551 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 10:00:28.692670 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 10:00:28.693738 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. May 13 10:00:28.694078 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 10:00:28.695250 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 10:00:28.696169 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 10:00:28.696223 systemd-timesyncd[1435]: Initial clock synchronization to Tue 2025-05-13 10:00:28.353432 UTC. May 13 10:00:28.699977 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 10:00:28.701381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 10:00:28.706674 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 10:00:28.710208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 10:00:28.711647 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 10:00:28.712866 systemd[1]: Reached target sockets.target - Socket Units. May 13 10:00:28.713681 systemd[1]: Reached target basic.target - Basic System. 
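[Editor's note] The DHCPv4 lease above (10.0.0.98/16 from 10.0.0.1) puts the host, the gateway, and the NTP server that timesyncd contacts on the same subnet. A quick stdlib check of the addressing:

```python
# Verify the subnet relationships from the networkd/timesyncd lines above.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.98/16")
gateway = ipaddress.ip_address("10.0.0.1")      # also the NTP server (port 123)
print(iface.network)                            # 10.0.0.0/16
print(gateway in iface.network)                 # True: gateway is on-link
print(iface.network.num_addresses - 2)          # 65534 usable host addresses
```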
May 13 10:00:28.715715 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 10:00:28.715746 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 10:00:28.720550 systemd[1]: Starting containerd.service - containerd container runtime... May 13 10:00:28.724780 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 10:00:28.726750 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 10:00:28.729713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 10:00:28.731979 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 10:00:28.732721 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 10:00:28.742493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 10:00:28.745735 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 10:00:28.747081 jq[1471]: false May 13 10:00:28.747427 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 10:00:28.749181 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 10:00:28.753316 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 10:00:28.755091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 10:00:28.755470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 10:00:28.757841 systemd[1]: Starting update-engine.service - Update Engine... May 13 10:00:28.761532 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 10:00:28.765817 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 10:00:28.767921 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 10:00:28.769096 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 10:00:28.769972 systemd[1]: motdgen.service: Deactivated successfully. May 13 10:00:28.770089 jq[1482]: true May 13 10:00:28.771624 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 10:00:28.773109 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 10:00:28.773276 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
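[Editor's note] The ssh-key-proc-cmdline.service finished above installs a key passed on the kernel command line. A hedged sketch of the idea: scan /proc/cmdline for an `sshkey="..."` parameter (parameter name per Flatcar's documentation; simplified parsing, illustration only):

```python
# Extract an sshkey=... parameter from a kernel command line, as the
# ssh-key-proc-cmdline unit above does. Simplified sketch.
import shlex

def sshkey_from_cmdline(cmdline: str) -> str | None:
    for token in shlex.split(cmdline):   # shlex handles the quoted key
        if token.startswith("sshkey="):
            return token[len("sshkey="):]
    return None

print(sshkey_from_cmdline(
    'BOOT_IMAGE=/flatcar/vmlinuz-a sshkey="ssh-ed25519 AAAA... core@example"'))
```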
May 13 10:00:28.775275 extend-filesystems[1472]: Found loop3 May 13 10:00:28.777586 extend-filesystems[1472]: Found loop4 May 13 10:00:28.777586 extend-filesystems[1472]: Found loop5 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda May 13 10:00:28.777586 extend-filesystems[1472]: Found vda1 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda2 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda3 May 13 10:00:28.777586 extend-filesystems[1472]: Found usr May 13 10:00:28.777586 extend-filesystems[1472]: Found vda4 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda6 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda7 May 13 10:00:28.777586 extend-filesystems[1472]: Found vda9 May 13 10:00:28.777586 extend-filesystems[1472]: Checking size of /dev/vda9 May 13 10:00:28.799608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 10:00:28.799837 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 10:00:28.806978 extend-filesystems[1472]: Resized partition /dev/vda9 May 13 10:00:28.823766 jq[1492]: true May 13 10:00:28.827843 tar[1489]: linux-arm64/LICENSE May 13 10:00:28.828410 tar[1489]: linux-arm64/helm May 13 10:00:28.835377 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 10:00:28.840170 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button) May 13 10:00:28.840389 systemd-logind[1479]: New seat seat0. May 13 10:00:28.841630 systemd[1]: Started systemd-logind.service - User Login Management. May 13 10:00:28.850669 extend-filesystems[1508]: resize2fs 1.47.2 (1-Jan-2025) May 13 10:00:28.859532 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 10:00:28.863957 dbus-daemon[1469]: [system] SELinux support is enabled May 13 10:00:28.864262 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 10:00:28.869237 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 10:00:28.869560 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 10:00:28.871693 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 10:00:28.871721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 10:00:28.876425 dbus-daemon[1469]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 10:00:28.894935 update_engine[1481]: I20250513 10:00:28.894726 1481 main.cc:92] Flatcar Update Engine starting May 13 10:00:28.899529 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 10:00:28.907557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 10:00:28.913996 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 10:00:28.913996 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 10:00:28.913996 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
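[Editor's note] The resize2fs output above grows /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. In bytes, that expands the root filesystem to fill the disk:

```python
# Convert the block counts from the extend-filesystems output above to GiB.
BLOCK = 4096
old, new = 553472, 1864699
print(f"{old * BLOCK / 2**30:.2f} GiB -> {new * BLOCK / 2**30:.2f} GiB")
# ~2.11 GiB -> ~7.11 GiB
```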
May 13 10:00:28.918525 extend-filesystems[1472]: Resized filesystem in /dev/vda9 May 13 10:00:28.916561 systemd[1]: Started update-engine.service - Update Engine. May 13 10:00:28.919446 update_engine[1481]: I20250513 10:00:28.916481 1481 update_check_scheduler.cc:74] Next update check in 5m57s May 13 10:00:28.920627 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 10:00:28.922300 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 10:00:28.922502 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 10:00:28.931662 bash[1531]: Updated "/home/core/.ssh/authorized_keys" May 13 10:00:28.935559 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 10:00:28.937148 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 10:00:28.983924 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 10:00:29.073183 containerd[1493]: time="2025-05-13T10:00:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 10:00:29.077965 containerd[1493]: time="2025-05-13T10:00:29.077925252Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 10:00:29.087899 containerd[1493]: time="2025-05-13T10:00:29.087858731Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.806µs" May 13 10:00:29.087899 containerd[1493]: time="2025-05-13T10:00:29.087889475Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 10:00:29.087988 containerd[1493]: time="2025-05-13T10:00:29.087915434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 10:00:29.088129 containerd[1493]: time="2025-05-13T10:00:29.088097525Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 10:00:29.088162 containerd[1493]: time="2025-05-13T10:00:29.088129226Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 10:00:29.088162 containerd[1493]: time="2025-05-13T10:00:29.088155912Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 10:00:29.088232 containerd[1493]: time="2025-05-13T10:00:29.088213687Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 10:00:29.088255 containerd[1493]: time="2025-05-13T10:00:29.088230341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 10:00:29.088519 containerd[1493]: time="2025-05-13T10:00:29.088485407Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 10:00:29.088519 containerd[1493]: time="2025-05-13T10:00:29.088516458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 10:00:29.088561 containerd[1493]: 
time="2025-05-13T10:00:29.088529935Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 10:00:29.088561 containerd[1493]: time="2025-05-13T10:00:29.088538319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 10:00:29.088685 containerd[1493]: time="2025-05-13T10:00:29.088615237Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 10:00:29.088945 containerd[1493]: time="2025-05-13T10:00:29.088919885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 10:00:29.088972 containerd[1493]: time="2025-05-13T10:00:29.088957099Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 10:00:29.088972 containerd[1493]: time="2025-05-13T10:00:29.088968432Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 10:00:29.089017 containerd[1493]: time="2025-05-13T10:00:29.088999023Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 10:00:29.089824 containerd[1493]: time="2025-05-13T10:00:29.089777238Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 10:00:29.089918 containerd[1493]: time="2025-05-13T10:00:29.089892711Z" level=info msg="metadata content store policy set" policy=shared May 13 10:00:29.093329 containerd[1493]: time="2025-05-13T10:00:29.093290768Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 10:00:29.093385 containerd[1493]: time="2025-05-13T10:00:29.093342417Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 10:00:29.093385 containerd[1493]: time="2025-05-13T10:00:29.093356660Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 10:00:29.093385 containerd[1493]: time="2025-05-13T10:00:29.093367342Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 10:00:29.093385 containerd[1493]: time="2025-05-13T10:00:29.093379057Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093389471Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093400038Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093411142Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093421594Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093430974Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 
10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093439818Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 10:00:29.093462 containerd[1493]: time="2025-05-13T10:00:29.093450883Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 10:00:29.093593 containerd[1493]: time="2025-05-13T10:00:29.093574013Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 10:00:29.093611 containerd[1493]: time="2025-05-13T10:00:29.093594190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 10:00:29.093628 containerd[1493]: time="2025-05-13T10:00:29.093618962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 10:00:29.093658 containerd[1493]: time="2025-05-13T10:00:29.093629529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 10:00:29.093658 containerd[1493]: time="2025-05-13T10:00:29.093648595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 10:00:29.093695 containerd[1493]: time="2025-05-13T10:00:29.093661422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 10:00:29.093695 containerd[1493]: time="2025-05-13T10:00:29.093672257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 10:00:29.093695 containerd[1493]: time="2025-05-13T10:00:29.093681637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 10:00:29.093745 containerd[1493]: time="2025-05-13T10:00:29.093702197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 10:00:29.093745 containerd[1493]: time="2025-05-13T10:00:29.093716401Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 10:00:29.093745 containerd[1493]: time="2025-05-13T10:00:29.093726739Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 10:00:29.093915 containerd[1493]: time="2025-05-13T10:00:29.093893324Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 10:00:29.093915 containerd[1493]: time="2025-05-13T10:00:29.093914650Z" level=info msg="Start snapshots syncer" May 13 10:00:29.093960 containerd[1493]: time="2025-05-13T10:00:29.093933104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 10:00:29.094159 containerd[1493]: time="2025-05-13T10:00:29.094121551Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 10:00:29.094255 containerd[1493]: time="2025-05-13T10:00:29.094175305Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 10:00:29.094255 containerd[1493]: time="2025-05-13T10:00:29.094246748Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 10:00:29.094365 containerd[1493]: time="2025-05-13T10:00:29.094343116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 10:00:29.094395 containerd[1493]: time="2025-05-13T10:00:29.094371907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 10:00:29.094395 containerd[1493]: time="2025-05-13T10:00:29.094383623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 10:00:29.094427 containerd[1493]: time="2025-05-13T10:00:29.094393386Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 10:00:29.094427 containerd[1493]: time="2025-05-13T10:00:29.094404872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 10:00:29.094427 containerd[1493]: time="2025-05-13T10:00:29.094415286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 10:00:29.094487 containerd[1493]: time="2025-05-13T10:00:29.094426160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 10:00:29.094487 containerd[1493]: time="2025-05-13T10:00:29.094448787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 10:00:29.094487 containerd[1493]: 
time="2025-05-13T10:00:29.094461383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 10:00:29.094487 containerd[1493]: time="2025-05-13T10:00:29.094478689Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 10:00:29.094565 containerd[1493]: time="2025-05-13T10:00:29.094507251Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 10:00:29.094565 containerd[1493]: time="2025-05-13T10:00:29.094531448Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 10:00:29.094565 containerd[1493]: time="2025-05-13T10:00:29.094539297Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 10:00:29.094565 containerd[1493]: time="2025-05-13T10:00:29.094547643Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 10:00:29.094639 containerd[1493]: time="2025-05-13T10:00:29.094566863Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 10:00:29.094639 containerd[1493]: time="2025-05-13T10:00:29.094576550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 10:00:29.094639 containerd[1493]: time="2025-05-13T10:00:29.094586389Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 10:00:29.094690 containerd[1493]: time="2025-05-13T10:00:29.094669127Z" level=info msg="runtime interface created" May 13 10:00:29.094690 containerd[1493]: time="2025-05-13T10:00:29.094675176Z" level=info msg="created NRI interface" May 13 10:00:29.094722 containerd[1493]: time="2025-05-13T10:00:29.094689610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 10:00:29.094722 containerd[1493]: time="2025-05-13T10:00:29.094701594Z" level=info msg="Connect containerd service" May 13 10:00:29.094753 containerd[1493]: time="2025-05-13T10:00:29.094729467Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 10:00:29.095672 containerd[1493]: time="2025-05-13T10:00:29.095642987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 10:00:29.208556 containerd[1493]: time="2025-05-13T10:00:29.208448748Z" level=info msg="Start subscribing containerd event" May 13 10:00:29.208556 containerd[1493]: time="2025-05-13T10:00:29.208562612Z" level=info msg="Start recovering state" May 13 10:00:29.208688 containerd[1493]: time="2025-05-13T10:00:29.208660129Z" level=info msg="Start event monitor" May 13 10:00:29.208688 containerd[1493]: time="2025-05-13T10:00:29.208684173Z" level=info msg="Start cni network conf syncer for default" May 13 10:00:29.208721 containerd[1493]: time="2025-05-13T10:00:29.208692825Z" level=info msg="Start streaming server" May 13 10:00:29.208721 containerd[1493]: time="2025-05-13T10:00:29.208701249Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 10:00:29.208721 containerd[1493]: 
time="2025-05-13T10:00:29.208719550Z" level=info msg="runtime interface starting up..." May 13 10:00:29.208771 containerd[1493]: time="2025-05-13T10:00:29.208726977Z" level=info msg="starting plugins..." May 13 10:00:29.208771 containerd[1493]: time="2025-05-13T10:00:29.208742483Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 10:00:29.208802 containerd[1493]: time="2025-05-13T10:00:29.208765340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 10:00:29.208820 containerd[1493]: time="2025-05-13T10:00:29.208808949Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 10:00:29.208951 systemd[1]: Started containerd.service - containerd container runtime. May 13 10:00:29.210439 containerd[1493]: time="2025-05-13T10:00:29.210405810Z" level=info msg="containerd successfully booted in 0.137581s" May 13 10:00:29.277237 tar[1489]: linux-arm64/README.md May 13 10:00:29.294522 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 10:00:29.421496 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 10:00:29.438888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 10:00:29.441828 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 10:00:29.459224 systemd[1]: issuegen.service: Deactivated successfully. May 13 10:00:29.459438 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 10:00:29.461963 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 10:00:29.478848 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 10:00:29.481011 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 10:00:29.484720 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 10:00:29.485655 systemd[1]: Reached target getty.target - Login Prompts. May 13 10:00:30.085625 systemd-networkd[1434]: eth0: Gained IPv6LL May 13 10:00:30.087623 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 10:00:30.089403 systemd[1]: Reached target network-online.target - Network is Online. May 13 10:00:30.091531 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 10:00:30.093461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:30.106937 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 10:00:30.120210 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 10:00:30.120430 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 10:00:30.121993 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 10:00:30.125557 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 10:00:30.626028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:30.627730 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 10:00:30.630869 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 10:00:30.632735 systemd[1]: Startup finished in 2.076s (kernel) + 5.513s (initrd) + 3.547s (userspace) = 11.138s. 
May 13 10:00:31.004557 kubelet[1607]: E0513 10:00:31.004437 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 10:00:31.006996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 10:00:31.007218 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 10:00:31.007616 systemd[1]: kubelet.service: Consumed 782ms CPU time, 247.5M memory peak. May 13 10:00:33.929879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 10:00:33.931338 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:44338.service - OpenSSH per-connection server daemon (10.0.0.1:44338). May 13 10:00:34.012367 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 44338 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.014196 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.019960 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 10:00:34.021046 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 10:00:34.026365 systemd-logind[1479]: New session 1 of user core. May 13 10:00:34.047285 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 10:00:34.049725 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 10:00:34.060313 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 10:00:34.062328 systemd-logind[1479]: New session c1 of user core. May 13 10:00:34.188865 systemd[1624]: Queued start job for default target default.target. May 13 10:00:34.206530 systemd[1624]: Created slice app.slice - User Application Slice. May 13 10:00:34.206558 systemd[1624]: Reached target paths.target - Paths. May 13 10:00:34.206593 systemd[1624]: Reached target timers.target - Timers. May 13 10:00:34.207742 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 10:00:34.216710 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 10:00:34.216765 systemd[1624]: Reached target sockets.target - Sockets. May 13 10:00:34.216803 systemd[1624]: Reached target basic.target - Basic System. May 13 10:00:34.216830 systemd[1624]: Reached target default.target - Main User Target. May 13 10:00:34.216854 systemd[1624]: Startup finished in 149ms. May 13 10:00:34.217037 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 10:00:34.218455 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 10:00:34.274036 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:44346.service - OpenSSH per-connection server daemon (10.0.0.1:44346). May 13 10:00:34.326027 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 44346 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.327312 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.331442 systemd-logind[1479]: New session 2 of user core. May 13 10:00:34.348727 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 13 10:00:34.397949 sshd[1637]: Connection closed by 10.0.0.1 port 44346 May 13 10:00:34.398205 sshd-session[1635]: pam_unix(sshd:session): session closed for user core May 13 10:00:34.411327 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:44346.service: Deactivated successfully. May 13 10:00:34.413674 systemd[1]: session-2.scope: Deactivated successfully. May 13 10:00:34.414213 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. May 13 10:00:34.416280 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:44360.service - OpenSSH per-connection server daemon (10.0.0.1:44360). May 13 10:00:34.417475 systemd-logind[1479]: Removed session 2. May 13 10:00:34.470124 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 44360 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.471737 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.475555 systemd-logind[1479]: New session 3 of user core. May 13 10:00:34.484721 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 10:00:34.531179 sshd[1646]: Connection closed by 10.0.0.1 port 44360 May 13 10:00:34.531443 sshd-session[1643]: pam_unix(sshd:session): session closed for user core May 13 10:00:34.542444 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:44360.service: Deactivated successfully. May 13 10:00:34.545585 systemd[1]: session-3.scope: Deactivated successfully. May 13 10:00:34.546271 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. May 13 10:00:34.548174 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:44366.service - OpenSSH per-connection server daemon (10.0.0.1:44366). May 13 10:00:34.548794 systemd-logind[1479]: Removed session 3. May 13 10:00:34.598684 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 44366 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.599755 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.603806 systemd-logind[1479]: New session 4 of user core. May 13 10:00:34.615641 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 10:00:34.664678 sshd[1654]: Connection closed by 10.0.0.1 port 44366 May 13 10:00:34.664997 sshd-session[1652]: pam_unix(sshd:session): session closed for user core May 13 10:00:34.682291 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:44366.service: Deactivated successfully. May 13 10:00:34.683506 systemd[1]: session-4.scope: Deactivated successfully. May 13 10:00:34.685664 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. May 13 10:00:34.687538 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:44368.service - OpenSSH per-connection server daemon (10.0.0.1:44368). May 13 10:00:34.689682 systemd-logind[1479]: Removed session 4. May 13 10:00:34.738456 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 44368 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.739481 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.743115 systemd-logind[1479]: New session 5 of user core. May 13 10:00:34.753718 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 10:00:34.808186 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 10:00:34.810268 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:00:34.824069 sudo[1663]: pam_unix(sudo:session): session closed for user root May 13 10:00:34.825466 sshd[1662]: Connection closed by 10.0.0.1 port 44368 May 13 10:00:34.825943 sshd-session[1660]: pam_unix(sshd:session): session closed for user core May 13 10:00:34.834199 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:44368.service: Deactivated successfully. May 13 10:00:34.836703 systemd[1]: session-5.scope: Deactivated successfully. May 13 10:00:34.837248 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. May 13 10:00:34.839479 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:44370.service - OpenSSH per-connection server daemon (10.0.0.1:44370). May 13 10:00:34.840200 systemd-logind[1479]: Removed session 5. May 13 10:00:34.890769 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 44370 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:34.891897 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:34.895565 systemd-logind[1479]: New session 6 of user core. May 13 10:00:34.903640 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 10:00:34.951926 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 10:00:34.952172 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:00:34.997014 sudo[1673]: pam_unix(sudo:session): session closed for user root May 13 10:00:35.001664 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 10:00:35.001910 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:00:35.009505 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 10:00:35.043503 augenrules[1695]: No rules May 13 10:00:35.044571 systemd[1]: audit-rules.service: Deactivated successfully. May 13 10:00:35.045580 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 10:00:35.046811 sudo[1672]: pam_unix(sudo:session): session closed for user root May 13 10:00:35.047794 sshd[1671]: Connection closed by 10.0.0.1 port 44370 May 13 10:00:35.048144 sshd-session[1669]: pam_unix(sshd:session): session closed for user core May 13 10:00:35.058181 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:44370.service: Deactivated successfully. May 13 10:00:35.060066 systemd[1]: session-6.scope: Deactivated successfully. May 13 10:00:35.062657 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. May 13 10:00:35.064753 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:44382.service - OpenSSH per-connection server daemon (10.0.0.1:44382). May 13 10:00:35.065327 systemd-logind[1479]: Removed session 6. May 13 10:00:35.119960 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 44382 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:00:35.121033 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:00:35.124980 systemd-logind[1479]: New session 7 of user core. May 13 10:00:35.136633 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 10:00:35.185093 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 10:00:35.185622 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:00:35.534346 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 10:00:35.544840 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 10:00:35.793012 dockerd[1727]: time="2025-05-13T10:00:35.792900091Z" level=info msg="Starting up" May 13 10:00:35.796751 dockerd[1727]: time="2025-05-13T10:00:35.796505936Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 10:00:35.921196 dockerd[1727]: time="2025-05-13T10:00:35.921146220Z" level=info msg="Loading containers: start." May 13 10:00:35.929619 kernel: Initializing XFRM netlink socket May 13 10:00:36.120089 systemd-networkd[1434]: docker0: Link UP May 13 10:00:36.123252 dockerd[1727]: time="2025-05-13T10:00:36.123207389Z" level=info msg="Loading containers: done." May 13 10:00:36.135777 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3414929546-merged.mount: Deactivated successfully. May 13 10:00:36.136772 dockerd[1727]: time="2025-05-13T10:00:36.136722294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 10:00:36.136823 dockerd[1727]: time="2025-05-13T10:00:36.136807438Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 10:00:36.136920 dockerd[1727]: time="2025-05-13T10:00:36.136904300Z" level=info msg="Initializing buildkit" May 13 10:00:36.156483 dockerd[1727]: time="2025-05-13T10:00:36.156444052Z" level=info msg="Completed buildkit initialization" May 13 10:00:36.161087 dockerd[1727]: time="2025-05-13T10:00:36.161050130Z" level=info msg="Daemon has completed initialization" May 13 10:00:36.161108 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 10:00:36.161195 dockerd[1727]: time="2025-05-13T10:00:36.161099643Z" level=info msg="API listen on /run/docker.sock" May 13 10:00:36.939989 containerd[1493]: time="2025-05-13T10:00:36.939948593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 10:00:37.647020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961857355.mount: Deactivated successfully. 
May 13 10:00:38.695428 containerd[1493]: time="2025-05-13T10:00:38.695364933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:38.696987 containerd[1493]: time="2025-05-13T10:00:38.696959964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 10:00:38.697717 containerd[1493]: time="2025-05-13T10:00:38.697670929Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:38.700434 containerd[1493]: time="2025-05-13T10:00:38.700404506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:38.701274 containerd[1493]: time="2025-05-13T10:00:38.701246837Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.761261599s" May 13 10:00:38.701442 containerd[1493]: time="2025-05-13T10:00:38.701329123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 10:00:38.702005 containerd[1493]: time="2025-05-13T10:00:38.701927043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 10:00:39.905905 containerd[1493]: time="2025-05-13T10:00:39.905855296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:39.906552 containerd[1493]: time="2025-05-13T10:00:39.906522057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 10:00:39.907451 containerd[1493]: time="2025-05-13T10:00:39.907386802Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:39.909739 containerd[1493]: time="2025-05-13T10:00:39.909687343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:39.910586 containerd[1493]: time="2025-05-13T10:00:39.910549161Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.208445132s" May 13 10:00:39.910649 containerd[1493]: time="2025-05-13T10:00:39.910595315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 10:00:39.911366 
containerd[1493]: time="2025-05-13T10:00:39.911189543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 10:00:41.117181 containerd[1493]: time="2025-05-13T10:00:41.117136239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:41.117634 containerd[1493]: time="2025-05-13T10:00:41.117588186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 10:00:41.118465 containerd[1493]: time="2025-05-13T10:00:41.118419749Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:41.120784 containerd[1493]: time="2025-05-13T10:00:41.120757406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:41.121775 containerd[1493]: time="2025-05-13T10:00:41.121735891Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.210514524s" May 13 10:00:41.121775 containerd[1493]: time="2025-05-13T10:00:41.121768289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 10:00:41.122273 containerd[1493]: time="2025-05-13T10:00:41.122241412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 10:00:41.257503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 10:00:41.258836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:41.382544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:41.385772 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 10:00:41.467850 kubelet[2007]: E0513 10:00:41.467760 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 10:00:41.471327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 10:00:41.471472 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 10:00:41.472009 systemd[1]: kubelet.service: Consumed 186ms CPU time, 102.2M memory peak. May 13 10:00:42.098328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629364818.mount: Deactivated successfully. 
May 13 10:00:42.414194 containerd[1493]: time="2025-05-13T10:00:42.414082129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:42.414905 containerd[1493]: time="2025-05-13T10:00:42.414847507Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 10:00:42.415481 containerd[1493]: time="2025-05-13T10:00:42.415434919Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:42.417411 containerd[1493]: time="2025-05-13T10:00:42.417380202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:42.418390 containerd[1493]: time="2025-05-13T10:00:42.418320371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.29604913s" May 13 10:00:42.418390 containerd[1493]: time="2025-05-13T10:00:42.418364793Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 10:00:42.419033 containerd[1493]: time="2025-05-13T10:00:42.418832674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 10:00:42.986234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583644923.mount: Deactivated successfully. 
May 13 10:00:43.669346 containerd[1493]: time="2025-05-13T10:00:43.669300286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:43.670312 containerd[1493]: time="2025-05-13T10:00:43.670283190Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 10:00:43.670968 containerd[1493]: time="2025-05-13T10:00:43.670933069Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:43.673950 containerd[1493]: time="2025-05-13T10:00:43.673922829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:43.675524 containerd[1493]: time="2025-05-13T10:00:43.675469903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.25659927s" May 13 10:00:43.675715 containerd[1493]: time="2025-05-13T10:00:43.675501969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 10:00:43.676175 containerd[1493]: time="2025-05-13T10:00:43.676149106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 10:00:44.092740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679393040.mount: Deactivated successfully. 
May 13 10:00:44.097012 containerd[1493]: time="2025-05-13T10:00:44.096976354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:00:44.097303 containerd[1493]: time="2025-05-13T10:00:44.097263564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 10:00:44.098238 containerd[1493]: time="2025-05-13T10:00:44.098193796Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:00:44.102274 containerd[1493]: time="2025-05-13T10:00:44.101971038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:00:44.102951 containerd[1493]: time="2025-05-13T10:00:44.102873711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 426.690505ms" May 13 10:00:44.102951 containerd[1493]: time="2025-05-13T10:00:44.102905009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 10:00:44.103436 containerd[1493]: time="2025-05-13T10:00:44.103400330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 10:00:44.584304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151056573.mount: Deactivated successfully. 
May 13 10:00:46.150295 containerd[1493]: time="2025-05-13T10:00:46.150245563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:46.152662 containerd[1493]: time="2025-05-13T10:00:46.152619731Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 10:00:46.153557 containerd[1493]: time="2025-05-13T10:00:46.153520965Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:46.158945 containerd[1493]: time="2025-05-13T10:00:46.158885760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:00:46.159937 containerd[1493]: time="2025-05-13T10:00:46.159907817Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.056410084s" May 13 10:00:46.159982 containerd[1493]: time="2025-05-13T10:00:46.159941707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 10:00:50.991477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:50.991638 systemd[1]: kubelet.service: Consumed 186ms CPU time, 102.2M memory peak. May 13 10:00:50.993542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:51.012894 systemd[1]: Reload requested from client PID 2163 ('systemctl') (unit session-7.scope)... May 13 10:00:51.012912 systemd[1]: Reloading... May 13 10:00:51.081540 zram_generator::config[2206]: No configuration found. May 13 10:00:51.155625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:00:51.240825 systemd[1]: Reloading finished in 227 ms. May 13 10:00:51.297942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 10:00:51.298023 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 10:00:51.298238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:51.298280 systemd[1]: kubelet.service: Consumed 83ms CPU time, 90.1M memory peak. May 13 10:00:51.299656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:51.404222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:51.407411 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:00:51.441715 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:00:51.441715 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 13 10:00:51.441715 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:00:51.442013 kubelet[2251]: I0513 10:00:51.441790 2251 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:00:53.058543 kubelet[2251]: I0513 10:00:53.058259 2251 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 10:00:53.058543 kubelet[2251]: I0513 10:00:53.058307 2251 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:00:53.058873 kubelet[2251]: I0513 10:00:53.058730 2251 server.go:954] "Client rotation is on, will bootstrap in background" May 13 10:00:53.107683 kubelet[2251]: E0513 10:00:53.107631 2251 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 13 10:00:53.108412 kubelet[2251]: I0513 10:00:53.108387 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:00:53.114641 kubelet[2251]: I0513 10:00:53.114620 2251 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:00:53.117908 kubelet[2251]: I0513 10:00:53.117877 2251 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 10:00:53.118486 kubelet[2251]: I0513 10:00:53.118450 2251 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:00:53.118662 kubelet[2251]: I0513 10:00:53.118491 2251 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 10:00:53.118791 kubelet[2251]: I0513 10:00:53.118730 2251 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:00:53.118791 kubelet[2251]: I0513 10:00:53.118739 2251 container_manager_linux.go:304] "Creating device plugin manager" May 13 10:00:53.118932 kubelet[2251]: I0513 10:00:53.118916 2251 state_mem.go:36] "Initialized new in-memory state store" May 13 10:00:53.124900 kubelet[2251]: I0513 10:00:53.124873 2251 kubelet.go:446] "Attempting to sync node with API server" May 13 10:00:53.124900 kubelet[2251]: I0513 10:00:53.124900 2251 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:00:53.124962 kubelet[2251]: I0513 10:00:53.124925 2251 kubelet.go:352] "Adding apiserver pod source" May 13 10:00:53.124962 kubelet[2251]: I0513 10:00:53.124935 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:00:53.129324 kubelet[2251]: W0513 10:00:53.129272 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 10:00:53.129481 kubelet[2251]: E0513 10:00:53.129457 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 13 10:00:53.129761 kubelet[2251]: W0513 10:00:53.129647 2251 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 10:00:53.129761 kubelet[2251]: E0513 10:00:53.129701 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 13 10:00:53.130468 kubelet[2251]: I0513 10:00:53.130408 2251 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:00:53.131100 kubelet[2251]: I0513 10:00:53.131072 2251 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:00:53.131268 kubelet[2251]: W0513 10:00:53.131254 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 10:00:53.133199 kubelet[2251]: I0513 10:00:53.133176 2251 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 10:00:53.133529 kubelet[2251]: I0513 10:00:53.133486 2251 server.go:1287] "Started kubelet" May 13 10:00:53.135101 kubelet[2251]: I0513 10:00:53.135028 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:00:53.135467 kubelet[2251]: I0513 10:00:53.135105 2251 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:00:53.135584 kubelet[2251]: I0513 10:00:53.135570 2251 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:00:53.137364 kubelet[2251]: I0513 10:00:53.137335 2251 server.go:490] "Adding debug handlers to kubelet server" May 13 10:00:53.138056 kubelet[2251]: I0513 10:00:53.137909 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:00:53.139681 kubelet[2251]: I0513 10:00:53.139657 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:00:53.141572 kubelet[2251]: I0513 10:00:53.141440 2251 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 10:00:53.141850 kubelet[2251]: E0513 10:00:53.141824 2251 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:00:53.142377 kubelet[2251]: I0513 10:00:53.142345 2251 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:00:53.142432 kubelet[2251]: I0513 10:00:53.142406 2251 reconciler.go:26] "Reconciler: start to sync state" May 13 10:00:53.143337 kubelet[2251]: W0513 10:00:53.142695 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 10:00:53.143337 kubelet[2251]: E0513 10:00:53.142733 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection 
refused" logger="UnhandledError" May 13 10:00:53.143337 kubelet[2251]: I0513 10:00:53.142886 2251 factory.go:221] Registration of the systemd container factory successfully May 13 10:00:53.143337 kubelet[2251]: I0513 10:00:53.142952 2251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:00:53.143904 kubelet[2251]: E0513 10:00:53.143876 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" May 13 10:00:53.144438 kubelet[2251]: I0513 10:00:53.144307 2251 factory.go:221] Registration of the containerd container factory successfully May 13 10:00:53.144658 kubelet[2251]: E0513 10:00:53.144636 2251 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:00:53.145420 kubelet[2251]: E0513 10:00:53.144865 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f0de3f349965e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 10:00:53.133465182 +0000 UTC m=+1.723226836,LastTimestamp:2025-05-13 10:00:53.133465182 +0000 UTC m=+1.723226836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 10:00:53.154897 kubelet[2251]: I0513 10:00:53.154880 2251 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 10:00:53.155001 kubelet[2251]: I0513 10:00:53.154989 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 10:00:53.155058 kubelet[2251]: I0513 10:00:53.155049 2251 state_mem.go:36] "Initialized new in-memory state store" May 13 10:00:53.160314 kubelet[2251]: I0513 10:00:53.160267 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:00:53.161326 kubelet[2251]: I0513 10:00:53.161306 2251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 10:00:53.161505 kubelet[2251]: I0513 10:00:53.161446 2251 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 10:00:53.161636 kubelet[2251]: I0513 10:00:53.161622 2251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 10:00:53.162424 kubelet[2251]: I0513 10:00:53.162338 2251 kubelet.go:2388] "Starting kubelet main sync loop" May 13 10:00:53.162424 kubelet[2251]: E0513 10:00:53.162393 2251 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:00:53.162424 kubelet[2251]: W0513 10:00:53.162136 2251 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 10:00:53.162561 kubelet[2251]: E0513 10:00:53.162432 2251 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 13 10:00:53.223032 kubelet[2251]: I0513 10:00:53.222991 2251 policy_none.go:49] "None policy: Start" May 13 10:00:53.223202 kubelet[2251]: I0513 10:00:53.223179 2251 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 10:00:53.223305 kubelet[2251]: I0513 10:00:53.223295 2251 state_mem.go:35] "Initializing new in-memory state store" May 13 10:00:53.229669 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 10:00:53.242411 kubelet[2251]: E0513 10:00:53.242378 2251 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:00:53.249769 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 10:00:53.252814 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 10:00:53.262698 kubelet[2251]: E0513 10:00:53.262655 2251 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:00:53.265336 kubelet[2251]: I0513 10:00:53.265296 2251 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:00:53.265692 kubelet[2251]: I0513 10:00:53.265538 2251 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:00:53.265692 kubelet[2251]: I0513 10:00:53.265556 2251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:00:53.265787 kubelet[2251]: I0513 10:00:53.265770 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:00:53.267325 kubelet[2251]: E0513 10:00:53.267293 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 10:00:53.267399 kubelet[2251]: E0513 10:00:53.267344 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 10:00:53.344894 kubelet[2251]: E0513 10:00:53.344770 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" May 13 10:00:53.367055 kubelet[2251]: I0513 10:00:53.367014 2251 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:00:53.367455 kubelet[2251]: E0513 10:00:53.367414 2251 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 13 10:00:53.470660 systemd[1]: Created slice kubepods-burstable-pod10996eeb8142ab621b95f56d96fbe95b.slice - libcontainer container kubepods-burstable-pod10996eeb8142ab621b95f56d96fbe95b.slice. May 13 10:00:53.493775 kubelet[2251]: E0513 10:00:53.493694 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:53.496247 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 10:00:53.497877 kubelet[2251]: E0513 10:00:53.497856 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:53.500086 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 10:00:53.501471 kubelet[2251]: E0513 10:00:53.501443 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:53.545695 kubelet[2251]: I0513 10:00:53.545654 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:53.545695 kubelet[2251]: I0513 10:00:53.545694 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:53.545773 kubelet[2251]: I0513 10:00:53.545732 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:53.545773 kubelet[2251]: I0513 10:00:53.545751 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:53.545773 kubelet[2251]: I0513 10:00:53.545771 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:53.545844 kubelet[2251]: I0513 10:00:53.545786 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:53.545844 kubelet[2251]: I0513 10:00:53.545801 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:53.545844 kubelet[2251]: I0513 10:00:53.545822 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 10:00:53.545844 kubelet[2251]: I0513 10:00:53.545841 2251 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:53.568685 kubelet[2251]: I0513 10:00:53.568661 2251 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:00:53.569039 kubelet[2251]: E0513 10:00:53.568994 2251 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 13 10:00:53.746275 kubelet[2251]: E0513 10:00:53.746174 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" May 13 10:00:53.795551 kubelet[2251]: E0513 10:00:53.795476 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.797889 containerd[1493]: time="2025-05-13T10:00:53.797845045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10996eeb8142ab621b95f56d96fbe95b,Namespace:kube-system,Attempt:0,}" May 13 10:00:53.798991 kubelet[2251]: E0513 10:00:53.798966 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.799285 containerd[1493]: time="2025-05-13T10:00:53.799256908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 10:00:53.802763 kubelet[2251]: E0513 10:00:53.802661 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.803039 containerd[1493]: time="2025-05-13T10:00:53.803008457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 10:00:53.821636 containerd[1493]: time="2025-05-13T10:00:53.821568190Z" level=info msg="connecting to shim 67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6" address="unix:///run/containerd/s/68f0a5bf3fe19f4fac88bfabbbe76d8b0cb3b78cc8ebae0e540dfc87eddda3a2" namespace=k8s.io protocol=ttrpc version=3 May 13 10:00:53.823203 containerd[1493]: time="2025-05-13T10:00:53.823126398Z" level=info msg="connecting to shim b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170" address="unix:///run/containerd/s/682fde40a6d59ff3e59c38fd37279129f9323eb5e1dea9cae9d2947599ad3cdd" namespace=k8s.io protocol=ttrpc version=3 May 13 10:00:53.837665 containerd[1493]: time="2025-05-13T10:00:53.837616734Z" level=info msg="connecting to shim aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90" address="unix:///run/containerd/s/e15e0693f99213a3a572baa52b513312ae06196324cdb9443e97a9147020aa9f" namespace=k8s.io protocol=ttrpc version=3 May 13 10:00:53.853661 systemd[1]: Started cri-containerd-67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6.scope - libcontainer container 
67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6. May 13 10:00:53.854728 systemd[1]: Started cri-containerd-b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170.scope - libcontainer container b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170. May 13 10:00:53.861732 systemd[1]: Started cri-containerd-aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90.scope - libcontainer container aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90. May 13 10:00:53.896305 containerd[1493]: time="2025-05-13T10:00:53.896261371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10996eeb8142ab621b95f56d96fbe95b,Namespace:kube-system,Attempt:0,} returns sandbox id \"67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6\"" May 13 10:00:53.897358 kubelet[2251]: E0513 10:00:53.897320 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.898028 containerd[1493]: time="2025-05-13T10:00:53.897999585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170\"" May 13 10:00:53.899289 kubelet[2251]: E0513 10:00:53.899268 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.899905 containerd[1493]: time="2025-05-13T10:00:53.899874561Z" level=info msg="CreateContainer within sandbox \"67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 10:00:53.900928 containerd[1493]: time="2025-05-13T10:00:53.900877975Z" level=info msg="CreateContainer within sandbox \"b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 10:00:53.905961 containerd[1493]: time="2025-05-13T10:00:53.905923352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90\"" May 13 10:00:53.906794 kubelet[2251]: E0513 10:00:53.906774 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:53.908960 containerd[1493]: time="2025-05-13T10:00:53.908929599Z" level=info msg="CreateContainer within sandbox \"aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 10:00:53.909133 containerd[1493]: time="2025-05-13T10:00:53.909108008Z" level=info msg="Container c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9: CDI devices from CRI Config.CDIDevices: []" May 13 10:00:53.911997 containerd[1493]: time="2025-05-13T10:00:53.911962080Z" level=info msg="Container fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da: CDI devices from CRI Config.CDIDevices: []" May 13 10:00:53.915973 containerd[1493]: time="2025-05-13T10:00:53.915920989Z" level=info msg="Container 
95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f: CDI devices from CRI Config.CDIDevices: []" May 13 10:00:53.919461 containerd[1493]: time="2025-05-13T10:00:53.919417023Z" level=info msg="CreateContainer within sandbox \"b96316562dc0edfa7c01c28430706cea166c70c31b87d65c04dee23b26b89170\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da\"" May 13 10:00:53.920221 containerd[1493]: time="2025-05-13T10:00:53.920196746Z" level=info msg="StartContainer for \"fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da\"" May 13 10:00:53.920983 containerd[1493]: time="2025-05-13T10:00:53.920950993Z" level=info msg="CreateContainer within sandbox \"67998ca5b43eb3ae93e27323dc1347aafb2c83984b21a6d465b1cf824c85c5c6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9\"" May 13 10:00:53.921306 containerd[1493]: time="2025-05-13T10:00:53.921281338Z" level=info msg="StartContainer for \"c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9\"" May 13 10:00:53.921351 containerd[1493]: time="2025-05-13T10:00:53.921327138Z" level=info msg="connecting to shim fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da" address="unix:///run/containerd/s/682fde40a6d59ff3e59c38fd37279129f9323eb5e1dea9cae9d2947599ad3cdd" protocol=ttrpc version=3 May 13 10:00:53.922304 containerd[1493]: time="2025-05-13T10:00:53.922270935Z" level=info msg="connecting to shim c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9" address="unix:///run/containerd/s/68f0a5bf3fe19f4fac88bfabbbe76d8b0cb3b78cc8ebae0e540dfc87eddda3a2" protocol=ttrpc version=3 May 13 10:00:53.923830 containerd[1493]: time="2025-05-13T10:00:53.923417859Z" level=info msg="CreateContainer within sandbox \"aaf79eb2f0338180f24f63f92f4475e181ccd80deb1b6b6848f7398cbbd0cc90\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f\"" May 13 10:00:53.924636 containerd[1493]: time="2025-05-13T10:00:53.924560789Z" level=info msg="StartContainer for \"95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f\"" May 13 10:00:53.925702 containerd[1493]: time="2025-05-13T10:00:53.925664309Z" level=info msg="connecting to shim 95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f" address="unix:///run/containerd/s/e15e0693f99213a3a572baa52b513312ae06196324cdb9443e97a9147020aa9f" protocol=ttrpc version=3 May 13 10:00:53.944687 systemd[1]: Started cri-containerd-c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9.scope - libcontainer container c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9. May 13 10:00:53.945920 systemd[1]: Started cri-containerd-fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da.scope - libcontainer container fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da. May 13 10:00:53.949442 systemd[1]: Started cri-containerd-95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f.scope - libcontainer container 95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f. 
May 13 10:00:53.970828 kubelet[2251]: I0513 10:00:53.970798 2251 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:00:53.971311 kubelet[2251]: E0513 10:00:53.971283 2251 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 13 10:00:53.989185 containerd[1493]: time="2025-05-13T10:00:53.989081437Z" level=info msg="StartContainer for \"fabe303b9a6d6893e92aeb69c480e302b5a1d9107c393e349f4c04814d6193da\" returns successfully" May 13 10:00:53.995831 containerd[1493]: time="2025-05-13T10:00:53.995796788Z" level=info msg="StartContainer for \"c65a7fa31a6e0bfd6234f303b25c2718d4b7e7a9c60a35af76fc6adc2b3285d9\" returns successfully" May 13 10:00:54.004044 containerd[1493]: time="2025-05-13T10:00:54.003619244Z" level=info msg="StartContainer for \"95e3d2fda20335732a91fc9cdba4bfa786332924e93c8a65eb6ad49935ac8d0f\" returns successfully" May 13 10:00:54.171851 kubelet[2251]: E0513 10:00:54.171680 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:54.171851 kubelet[2251]: E0513 10:00:54.171803 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:54.174223 kubelet[2251]: E0513 10:00:54.174203 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:54.174348 kubelet[2251]: E0513 10:00:54.174313 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:54.177029 kubelet[2251]: E0513 10:00:54.177006 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:54.177152 kubelet[2251]: E0513 10:00:54.177114 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:54.772828 kubelet[2251]: I0513 10:00:54.772798 2251 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:00:55.178569 kubelet[2251]: E0513 10:00:55.178459 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:55.178849 kubelet[2251]: E0513 10:00:55.178597 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:55.179204 kubelet[2251]: E0513 10:00:55.179146 2251 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:00:55.179316 kubelet[2251]: E0513 10:00:55.179301 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:55.707782 kubelet[2251]: E0513 10:00:55.707738 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" May 13 10:00:55.783695 kubelet[2251]: I0513 10:00:55.783651 2251 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 10:00:55.843131 kubelet[2251]: I0513 10:00:55.843083 2251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:00:55.858045 kubelet[2251]: E0513 10:00:55.858006 2251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 10:00:55.858045 kubelet[2251]: I0513 10:00:55.858039 2251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 10:00:55.860180 kubelet[2251]: E0513 10:00:55.860148 2251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 10:00:55.860180 kubelet[2251]: I0513 10:00:55.860172 2251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:00:55.862175 kubelet[2251]: E0513 10:00:55.862141 2251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 10:00:56.127200 kubelet[2251]: I0513 10:00:56.127100 2251 apiserver.go:52] "Watching apiserver" May 13 10:00:56.143309 kubelet[2251]: I0513 10:00:56.143284 2251 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:00:56.178393 kubelet[2251]: I0513 10:00:56.178359 2251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:00:56.178516 kubelet[2251]: I0513 10:00:56.178493 2251 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:00:56.180196 kubelet[2251]: E0513 10:00:56.180171 2251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 10:00:56.180494 kubelet[2251]: E0513 10:00:56.180306 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:56.180914 kubelet[2251]: E0513 10:00:56.180851 2251 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 10:00:56.181031 kubelet[2251]: E0513 10:00:56.181017 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:57.854300 systemd[1]: Reload requested from client PID 2528 ('systemctl') (unit session-7.scope)... May 13 10:00:57.854315 systemd[1]: Reloading... May 13 10:00:57.924569 zram_generator::config[2571]: No configuration found. 
May 13 10:00:58.063315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:00:58.162304 systemd[1]: Reloading finished in 307 ms. May 13 10:00:58.185250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:58.208506 systemd[1]: kubelet.service: Deactivated successfully. May 13 10:00:58.208867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:58.208923 systemd[1]: kubelet.service: Consumed 2.157s CPU time, 124.1M memory peak. May 13 10:00:58.210753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:00:58.340888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:00:58.345144 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:00:58.380791 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:00:58.380791 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 10:00:58.380791 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:00:58.381201 kubelet[2613]: I0513 10:00:58.380830 2613 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:00:58.388146 kubelet[2613]: I0513 10:00:58.388109 2613 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 10:00:58.388146 kubelet[2613]: I0513 10:00:58.388134 2613 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:00:58.388384 kubelet[2613]: I0513 10:00:58.388356 2613 server.go:954] "Client rotation is on, will bootstrap in background" May 13 10:00:58.389544 kubelet[2613]: I0513 10:00:58.389502 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 10:00:58.391830 kubelet[2613]: I0513 10:00:58.391603 2613 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:00:58.395767 kubelet[2613]: I0513 10:00:58.395668 2613 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:00:58.400900 kubelet[2613]: I0513 10:00:58.400870 2613 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 10:00:58.401109 kubelet[2613]: I0513 10:00:58.401072 2613 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:00:58.401264 kubelet[2613]: I0513 10:00:58.401102 2613 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 10:00:58.401335 kubelet[2613]: I0513 10:00:58.401268 2613 topology_manager.go:138] "Creating topology manager with none policy" May 13 10:00:58.401335 kubelet[2613]: I0513 10:00:58.401277 2613 container_manager_linux.go:304] "Creating device plugin manager" May 13 10:00:58.401335 kubelet[2613]: I0513 10:00:58.401320 2613 state_mem.go:36] "Initialized new in-memory state store" May 13 10:00:58.401479 kubelet[2613]: I0513 10:00:58.401454 2613 kubelet.go:446] "Attempting to sync node with API server" May 13 10:00:58.401479 kubelet[2613]: I0513 10:00:58.401474 2613 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:00:58.401547 kubelet[2613]: I0513 10:00:58.401538 2613 kubelet.go:352] "Adding apiserver pod source" May 13 10:00:58.401575 kubelet[2613]: I0513 10:00:58.401557 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:00:58.403489 kubelet[2613]: I0513 10:00:58.403470 2613 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:00:58.403919 kubelet[2613]: I0513 10:00:58.403895 2613 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:00:58.404319 kubelet[2613]: I0513 10:00:58.404251 2613 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 10:00:58.404319 kubelet[2613]: I0513 10:00:58.404281 2613 server.go:1287] "Started kubelet" May 13 10:00:58.404501 kubelet[2613]: I0513 10:00:58.404434 2613 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:00:58.404768 kubelet[2613]: I0513 10:00:58.404668 2613 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:00:58.404899 kubelet[2613]: I0513 10:00:58.404883 2613 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:00:58.406533 kubelet[2613]: I0513 10:00:58.406482 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:00:58.406863 kubelet[2613]: I0513 10:00:58.406809 2613 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:00:58.407300 kubelet[2613]: E0513 10:00:58.407140 2613 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:00:58.407300 kubelet[2613]: I0513 10:00:58.407169 2613 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 10:00:58.408078 kubelet[2613]: I0513 10:00:58.408058 2613 server.go:490] "Adding debug handlers to kubelet server" May 13 10:00:58.413127 kubelet[2613]: I0513 10:00:58.410576 2613 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:00:58.414111 kubelet[2613]: I0513 10:00:58.410688 2613 reconciler.go:26] "Reconciler: start to sync state" May 13 10:00:58.421788 kubelet[2613]: I0513 10:00:58.421746 2613 factory.go:221] Registration of the systemd container factory successfully May 13 10:00:58.421852 kubelet[2613]: I0513 10:00:58.421836 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:00:58.422536 kubelet[2613]: I0513 10:00:58.421752 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:00:58.424730 kubelet[2613]: I0513 10:00:58.424616 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 10:00:58.424730 kubelet[2613]: I0513 10:00:58.424638 2613 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 10:00:58.424730 kubelet[2613]: I0513 10:00:58.424652 2613 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 10:00:58.424730 kubelet[2613]: I0513 10:00:58.424658 2613 kubelet.go:2388] "Starting kubelet main sync loop" May 13 10:00:58.424920 kubelet[2613]: E0513 10:00:58.424887 2613 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:00:58.426033 kubelet[2613]: E0513 10:00:58.425915 2613 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:00:58.431529 kubelet[2613]: I0513 10:00:58.431427 2613 factory.go:221] Registration of the containerd container factory successfully May 13 10:00:58.458091 kubelet[2613]: I0513 10:00:58.458067 2613 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 10:00:58.458091 kubelet[2613]: I0513 10:00:58.458082 2613 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 10:00:58.458233 kubelet[2613]: I0513 10:00:58.458148 2613 state_mem.go:36] "Initialized new in-memory state store" May 13 10:00:58.458506 kubelet[2613]: I0513 10:00:58.458434 2613 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 10:00:58.458506 kubelet[2613]: I0513 10:00:58.458454 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 10:00:58.458506 kubelet[2613]: I0513 10:00:58.458474 2613 policy_none.go:49] "None policy: Start" May 13 10:00:58.458506 kubelet[2613]: I0513 10:00:58.458482 2613 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 10:00:58.458506 kubelet[2613]: I0513 10:00:58.458493 2613 state_mem.go:35] "Initializing new in-memory state store" May 13 10:00:58.458814 kubelet[2613]: I0513 10:00:58.458794 2613 state_mem.go:75] "Updated machine memory state" May 13 10:00:58.462470 kubelet[2613]: I0513 10:00:58.462410 2613 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:00:58.462624 kubelet[2613]: I0513 10:00:58.462598 2613 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:00:58.462680 kubelet[2613]: I0513 10:00:58.462613 2613 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:00:58.463032 kubelet[2613]: I0513 10:00:58.462862 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:00:58.464394 kubelet[2613]: E0513 10:00:58.464350 2613 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 10:00:58.526283 kubelet[2613]: I0513 10:00:58.526244 2613 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:00:58.526360 kubelet[2613]: I0513 10:00:58.526300 2613 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.526553 kubelet[2613]: I0513 10:00:58.526539 2613 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:00:58.566083 kubelet[2613]: I0513 10:00:58.566043 2613 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:00:58.571620 kubelet[2613]: I0513 10:00:58.571596 2613 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 10:00:58.571769 kubelet[2613]: I0513 10:00:58.571754 2613 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 10:00:58.615365 kubelet[2613]: I0513 10:00:58.615331 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:58.615365 kubelet[2613]: I0513 10:00:58.615365 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.615465 kubelet[2613]: I0513 10:00:58.615385 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.615465 kubelet[2613]: I0513 10:00:58.615403 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.615465 kubelet[2613]: I0513 10:00:58.615420 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:58.615465 kubelet[2613]: I0513 10:00:58.615435 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10996eeb8142ab621b95f56d96fbe95b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10996eeb8142ab621b95f56d96fbe95b\") " pod="kube-system/kube-apiserver-localhost" May 13 10:00:58.615577 kubelet[2613]: I0513 10:00:58.615470 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.615577 kubelet[2613]: I0513 10:00:58.615564 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:00:58.615662 kubelet[2613]: I0513 10:00:58.615585 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 10:00:58.833003 kubelet[2613]: E0513 10:00:58.832908 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:58.833003 kubelet[2613]: E0513 10:00:58.832913 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:58.833502 kubelet[2613]: E0513 10:00:58.833446 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:58.856943 sudo[2648]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 10:00:58.857194 sudo[2648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 10:00:59.283700 sudo[2648]: pam_unix(sudo:session): session closed for user root May 13 10:00:59.401751 kubelet[2613]: I0513 10:00:59.401687 2613 apiserver.go:52] "Watching apiserver" May 13 10:00:59.414397 kubelet[2613]: I0513 10:00:59.414354 2613 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:00:59.444546 kubelet[2613]: I0513 10:00:59.443999 2613 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:00:59.444546 kubelet[2613]: E0513 10:00:59.444102 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:59.444546 kubelet[2613]: I0513 10:00:59.444410 2613 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:00:59.449951 kubelet[2613]: E0513 10:00:59.449768 2613 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 10:00:59.449951 kubelet[2613]: E0513 10:00:59.449903 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:59.450048 kubelet[2613]: E0513 10:00:59.450009 2613 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 
10:00:59.450376 kubelet[2613]: E0513 10:00:59.450359 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:00:59.479491 kubelet[2613]: I0513 10:00:59.479428 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.479383723 podStartE2EDuration="1.479383723s" podCreationTimestamp="2025-05-13 10:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:00:59.478670999 +0000 UTC m=+1.130328028" watchObservedRunningTime="2025-05-13 10:00:59.479383723 +0000 UTC m=+1.131040752" May 13 10:00:59.490236 kubelet[2613]: I0513 10:00:59.490180 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.490165308 podStartE2EDuration="1.490165308s" podCreationTimestamp="2025-05-13 10:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:00:59.489988286 +0000 UTC m=+1.141645315" watchObservedRunningTime="2025-05-13 10:00:59.490165308 +0000 UTC m=+1.141822337" May 13 10:01:00.446479 kubelet[2613]: E0513 10:01:00.446155 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:00.446479 kubelet[2613]: E0513 10:01:00.446280 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:00.446479 kubelet[2613]: E0513 10:01:00.446405 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:00.839111 sudo[1707]: pam_unix(sudo:session): session closed for user root May 13 10:01:00.840231 sshd[1706]: Connection closed by 10.0.0.1 port 44382 May 13 10:01:00.840630 sshd-session[1704]: pam_unix(sshd:session): session closed for user core May 13 10:01:00.844570 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:44382.service: Deactivated successfully. May 13 10:01:00.847105 systemd[1]: session-7.scope: Deactivated successfully. May 13 10:01:00.847578 systemd[1]: session-7.scope: Consumed 6.781s CPU time, 269.2M memory peak. May 13 10:01:00.850125 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. May 13 10:01:00.851363 systemd-logind[1479]: Removed session 7. 
May 13 10:01:01.447844 kubelet[2613]: E0513 10:01:01.447812 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:02.449182 kubelet[2613]: E0513 10:01:02.449094 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:04.906097 kubelet[2613]: I0513 10:01:04.906062 2613 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 10:01:04.907466 containerd[1493]: time="2025-05-13T10:01:04.907338326Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 10:01:04.907719 kubelet[2613]: I0513 10:01:04.907542 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 10:01:05.007625 kubelet[2613]: E0513 10:01:05.007540 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:05.022145 kubelet[2613]: I0513 10:01:05.022091 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.02208114 podStartE2EDuration="7.02208114s" podCreationTimestamp="2025-05-13 10:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:00:59.497728925 +0000 UTC m=+1.149385954" watchObservedRunningTime="2025-05-13 10:01:05.02208114 +0000 UTC m=+6.673738129" May 13 10:01:05.453790 kubelet[2613]: E0513 10:01:05.453697 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:05.673887 kubelet[2613]: W0513 10:01:05.673857 2613 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 10:01:05.673984 kubelet[2613]: E0513 10:01:05.673895 2613 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 10:01:05.681873 systemd[1]: Created slice kubepods-besteffort-pode3c5b23a_2db5_4bf4_af3e_8144069698aa.slice - libcontainer container kubepods-besteffort-pode3c5b23a_2db5_4bf4_af3e_8144069698aa.slice. May 13 10:01:05.699436 systemd[1]: Created slice kubepods-burstable-pod3d296026_a34d_4f60_8999_0d54c59fa524.slice - libcontainer container kubepods-burstable-pod3d296026_a34d_4f60_8999_0d54c59fa524.slice. 
May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761055 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-bpf-maps\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761095 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-xtables-lock\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761112 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-config-path\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761138 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6qjm\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-kube-api-access-d6qjm\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761156 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-etc-cni-netd\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761650 kubelet[2613]: I0513 10:01:05.761170 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-cgroup\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761187 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d296026-a34d-4f60-8999-0d54c59fa524-clustermesh-secrets\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761201 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt98b\" (UniqueName: \"kubernetes.io/projected/e3c5b23a-2db5-4bf4-af3e-8144069698aa-kube-api-access-qt98b\") pod \"kube-proxy-8c26m\" (UID: \"e3c5b23a-2db5-4bf4-af3e-8144069698aa\") " pod="kube-system/kube-proxy-8c26m" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761216 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-hostproc\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761230 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cni-path\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761247 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3c5b23a-2db5-4bf4-af3e-8144069698aa-lib-modules\") pod \"kube-proxy-8c26m\" (UID: \"e3c5b23a-2db5-4bf4-af3e-8144069698aa\") " pod="kube-system/kube-proxy-8c26m" May 13 10:01:05.761835 kubelet[2613]: I0513 10:01:05.761318 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-run\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761397 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-hubble-tls\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761416 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3c5b23a-2db5-4bf4-af3e-8144069698aa-kube-proxy\") pod \"kube-proxy-8c26m\" (UID: \"e3c5b23a-2db5-4bf4-af3e-8144069698aa\") " pod="kube-system/kube-proxy-8c26m" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761472 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3c5b23a-2db5-4bf4-af3e-8144069698aa-xtables-lock\") pod \"kube-proxy-8c26m\" (UID: \"e3c5b23a-2db5-4bf4-af3e-8144069698aa\") " pod="kube-system/kube-proxy-8c26m" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761489 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-lib-modules\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761550 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-net\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:05.761944 kubelet[2613]: I0513 10:01:05.761569 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-kernel\") pod \"cilium-9b4vw\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " pod="kube-system/cilium-9b4vw" May 13 10:01:06.002913 kubelet[2613]: E0513 10:01:06.002566 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.004218 containerd[1493]: time="2025-05-13T10:01:06.003200120Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-9b4vw,Uid:3d296026-a34d-4f60-8999-0d54c59fa524,Namespace:kube-system,Attempt:0,}" May 13 10:01:06.032662 containerd[1493]: time="2025-05-13T10:01:06.030055366Z" level=info msg="connecting to shim 6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" namespace=k8s.io protocol=ttrpc version=3 May 13 10:01:06.036297 systemd[1]: Created slice kubepods-besteffort-podd574392a_42d2_494a_9ad9_ba13b5d31061.slice - libcontainer container kubepods-besteffort-podd574392a_42d2_494a_9ad9_ba13b5d31061.slice. May 13 10:01:06.064793 kubelet[2613]: I0513 10:01:06.064707 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d574392a-42d2-494a-9ad9-ba13b5d31061-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g5jm5\" (UID: \"d574392a-42d2-494a-9ad9-ba13b5d31061\") " pod="kube-system/cilium-operator-6c4d7847fc-g5jm5" May 13 10:01:06.064793 kubelet[2613]: I0513 10:01:06.064798 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf6gv\" (UniqueName: \"kubernetes.io/projected/d574392a-42d2-494a-9ad9-ba13b5d31061-kube-api-access-pf6gv\") pod \"cilium-operator-6c4d7847fc-g5jm5\" (UID: \"d574392a-42d2-494a-9ad9-ba13b5d31061\") " pod="kube-system/cilium-operator-6c4d7847fc-g5jm5" May 13 10:01:06.079728 systemd[1]: Started cri-containerd-6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81.scope - libcontainer container 6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81. May 13 10:01:06.099442 containerd[1493]: time="2025-05-13T10:01:06.099397593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9b4vw,Uid:3d296026-a34d-4f60-8999-0d54c59fa524,Namespace:kube-system,Attempt:0,} returns sandbox id \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\"" May 13 10:01:06.100074 kubelet[2613]: E0513 10:01:06.100052 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.101116 containerd[1493]: time="2025-05-13T10:01:06.101093252Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 10:01:06.340950 kubelet[2613]: E0513 10:01:06.340473 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.341049 containerd[1493]: time="2025-05-13T10:01:06.340841028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g5jm5,Uid:d574392a-42d2-494a-9ad9-ba13b5d31061,Namespace:kube-system,Attempt:0,}" May 13 10:01:06.358091 containerd[1493]: time="2025-05-13T10:01:06.357655308Z" level=info msg="connecting to shim 59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95" address="unix:///run/containerd/s/4ea29379930e7db97c8d41fb5bca8207964cf1ba208433fa9eda68f679b167c9" namespace=k8s.io protocol=ttrpc version=3 May 13 10:01:06.378693 systemd[1]: Started cri-containerd-59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95.scope - libcontainer container 59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95. 
May 13 10:01:06.409010 containerd[1493]: time="2025-05-13T10:01:06.408960542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g5jm5,Uid:d574392a-42d2-494a-9ad9-ba13b5d31061,Namespace:kube-system,Attempt:0,} returns sandbox id \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\"" May 13 10:01:06.409780 kubelet[2613]: E0513 10:01:06.409750 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.599909 kubelet[2613]: E0513 10:01:06.599026 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.600011 containerd[1493]: time="2025-05-13T10:01:06.599390329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8c26m,Uid:e3c5b23a-2db5-4bf4-af3e-8144069698aa,Namespace:kube-system,Attempt:0,}" May 13 10:01:06.614679 containerd[1493]: time="2025-05-13T10:01:06.614641420Z" level=info msg="connecting to shim c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed" address="unix:///run/containerd/s/45555e9c601e7d58e1f2c9688228fc7770c8a6bb4a3d5ac04ecc36a0c738f8ba" namespace=k8s.io protocol=ttrpc version=3 May 13 10:01:06.635740 systemd[1]: Started cri-containerd-c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed.scope - libcontainer container c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed. May 13 10:01:06.656243 containerd[1493]: time="2025-05-13T10:01:06.656208475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8c26m,Uid:e3c5b23a-2db5-4bf4-af3e-8144069698aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed\"" May 13 10:01:06.656952 kubelet[2613]: E0513 10:01:06.656934 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:06.659654 containerd[1493]: time="2025-05-13T10:01:06.659610836Z" level=info msg="CreateContainer within sandbox \"c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 10:01:06.667582 containerd[1493]: time="2025-05-13T10:01:06.667548451Z" level=info msg="Container 18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:06.673901 containerd[1493]: time="2025-05-13T10:01:06.673868025Z" level=info msg="CreateContainer within sandbox \"c85c38744b20e55ebf8274f6f391b5c2e9f5db68c83ade14ceb1b64a0c5292ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef\"" May 13 10:01:06.674408 containerd[1493]: time="2025-05-13T10:01:06.674345812Z" level=info msg="StartContainer for \"18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef\"" May 13 10:01:06.676232 containerd[1493]: time="2025-05-13T10:01:06.676182742Z" level=info msg="connecting to shim 18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef" address="unix:///run/containerd/s/45555e9c601e7d58e1f2c9688228fc7770c8a6bb4a3d5ac04ecc36a0c738f8ba" protocol=ttrpc version=3 May 13 10:01:06.698653 systemd[1]: Started 
cri-containerd-18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef.scope - libcontainer container 18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef. May 13 10:01:06.732665 containerd[1493]: time="2025-05-13T10:01:06.732546987Z" level=info msg="StartContainer for \"18151ce5c44f92369c9c5b0f468103c275052a4260c3ac99b6c71f191aa189ef\" returns successfully" May 13 10:01:07.463538 kubelet[2613]: E0513 10:01:07.463489 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:07.473006 kubelet[2613]: I0513 10:01:07.472787 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8c26m" podStartSLOduration=2.472771639 podStartE2EDuration="2.472771639s" podCreationTimestamp="2025-05-13 10:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:01:07.472742272 +0000 UTC m=+9.124399302" watchObservedRunningTime="2025-05-13 10:01:07.472771639 +0000 UTC m=+9.124428668" May 13 10:01:09.696531 kubelet[2613]: E0513 10:01:09.696457 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:10.751592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185219691.mount: Deactivated successfully. May 13 10:01:11.851987 kubelet[2613]: E0513 10:01:11.851940 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:11.948405 containerd[1493]: time="2025-05-13T10:01:11.948345230Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:01:11.949695 containerd[1493]: time="2025-05-13T10:01:11.949657574Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 10:01:11.950556 containerd[1493]: time="2025-05-13T10:01:11.950488396Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:01:11.952116 containerd[1493]: time="2025-05-13T10:01:11.952069147Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.850737693s" May 13 10:01:11.952116 containerd[1493]: time="2025-05-13T10:01:11.952108554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 10:01:11.970910 containerd[1493]: time="2025-05-13T10:01:11.970843438Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" 
May 13 10:01:11.976884 containerd[1493]: time="2025-05-13T10:01:11.976796096Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 10:01:11.983552 containerd[1493]: time="2025-05-13T10:01:11.982946588Z" level=info msg="Container ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:11.988164 containerd[1493]: time="2025-05-13T10:01:11.988132955Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\"" May 13 10:01:11.990454 containerd[1493]: time="2025-05-13T10:01:11.990424027Z" level=info msg="StartContainer for \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\"" May 13 10:01:11.991472 containerd[1493]: time="2025-05-13T10:01:11.991424558Z" level=info msg="connecting to shim ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" protocol=ttrpc version=3 May 13 10:01:12.040729 systemd[1]: Started cri-containerd-ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05.scope - libcontainer container ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05. May 13 10:01:12.083082 containerd[1493]: time="2025-05-13T10:01:12.083029595Z" level=info msg="StartContainer for \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" returns successfully" May 13 10:01:12.110774 systemd[1]: cri-containerd-ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05.scope: Deactivated successfully. May 13 10:01:12.174882 containerd[1493]: time="2025-05-13T10:01:12.174821223Z" level=info msg="received exit event container_id:\"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" id:\"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" pid:3033 exited_at:{seconds:1747130472 nanos:134841810}" May 13 10:01:12.178842 containerd[1493]: time="2025-05-13T10:01:12.178790227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" id:\"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" pid:3033 exited_at:{seconds:1747130472 nanos:134841810}" May 13 10:01:12.209647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05-rootfs.mount: Deactivated successfully. 
May 13 10:01:12.474611 kubelet[2613]: E0513 10:01:12.474374 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:12.474611 kubelet[2613]: E0513 10:01:12.474462 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:12.477599 containerd[1493]: time="2025-05-13T10:01:12.477508143Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 10:01:12.498258 containerd[1493]: time="2025-05-13T10:01:12.498208465Z" level=info msg="Container bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:12.502734 containerd[1493]: time="2025-05-13T10:01:12.502690593Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\"" May 13 10:01:12.503105 containerd[1493]: time="2025-05-13T10:01:12.503078576Z" level=info msg="StartContainer for \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\"" May 13 10:01:12.503942 containerd[1493]: time="2025-05-13T10:01:12.503909951Z" level=info msg="connecting to shim bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" protocol=ttrpc version=3 May 13 10:01:12.521656 systemd[1]: Started cri-containerd-bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6.scope - libcontainer container bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6. May 13 10:01:12.556086 containerd[1493]: time="2025-05-13T10:01:12.556046338Z" level=info msg="StartContainer for \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" returns successfully" May 13 10:01:12.577314 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 10:01:12.577556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 10:01:12.577954 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 10:01:12.579654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 10:01:12.579973 systemd[1]: cri-containerd-bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6.scope: Deactivated successfully. May 13 10:01:12.580219 systemd[1]: cri-containerd-bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6.scope: Consumed 35ms CPU time, 7M memory peak, 6M read from disk, 4K written to disk. 
May 13 10:01:12.581497 containerd[1493]: time="2025-05-13T10:01:12.581460626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" id:\"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" pid:3078 exited_at:{seconds:1747130472 nanos:581061881}" May 13 10:01:12.581611 containerd[1493]: time="2025-05-13T10:01:12.581591607Z" level=info msg="received exit event container_id:\"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" id:\"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" pid:3078 exited_at:{seconds:1747130472 nanos:581061881}" May 13 10:01:12.610780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 10:01:13.320330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365136319.mount: Deactivated successfully. May 13 10:01:13.479711 kubelet[2613]: E0513 10:01:13.479665 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:13.483186 containerd[1493]: time="2025-05-13T10:01:13.482995484Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 10:01:13.500756 containerd[1493]: time="2025-05-13T10:01:13.500703416Z" level=info msg="Container 957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:13.517024 containerd[1493]: time="2025-05-13T10:01:13.516958005Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\"" May 13 10:01:13.518542 containerd[1493]: time="2025-05-13T10:01:13.518347299Z" level=info msg="StartContainer for \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\"" May 13 10:01:13.520297 containerd[1493]: time="2025-05-13T10:01:13.520262595Z" level=info msg="connecting to shim 957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" protocol=ttrpc version=3 May 13 10:01:13.548715 systemd[1]: Started cri-containerd-957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b.scope - libcontainer container 957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b. May 13 10:01:13.583169 containerd[1493]: time="2025-05-13T10:01:13.582704951Z" level=info msg="StartContainer for \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" returns successfully" May 13 10:01:13.599351 systemd[1]: cri-containerd-957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b.scope: Deactivated successfully. 
May 13 10:01:13.601291 containerd[1493]: time="2025-05-13T10:01:13.601246212Z" level=info msg="received exit event container_id:\"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" id:\"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" pid:3133 exited_at:{seconds:1747130473 nanos:601005655}" May 13 10:01:13.602706 containerd[1493]: time="2025-05-13T10:01:13.602677873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" id:\"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" pid:3133 exited_at:{seconds:1747130473 nanos:601005655}" May 13 10:01:13.904041 containerd[1493]: time="2025-05-13T10:01:13.903985371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:01:13.904557 containerd[1493]: time="2025-05-13T10:01:13.904505011Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 10:01:13.905341 containerd[1493]: time="2025-05-13T10:01:13.905301894Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:01:13.906576 containerd[1493]: time="2025-05-13T10:01:13.906548326Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.935320863s" May 13 10:01:13.906618 containerd[1493]: time="2025-05-13T10:01:13.906581932Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 10:01:13.908877 containerd[1493]: time="2025-05-13T10:01:13.908803755Z" level=info msg="CreateContainer within sandbox \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 10:01:13.940284 containerd[1493]: time="2025-05-13T10:01:13.940233165Z" level=info msg="Container 67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:13.947097 containerd[1493]: time="2025-05-13T10:01:13.947048977Z" level=info msg="CreateContainer within sandbox \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\"" May 13 10:01:13.947433 containerd[1493]: time="2025-05-13T10:01:13.947394550Z" level=info msg="StartContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\"" May 13 10:01:13.948866 containerd[1493]: time="2025-05-13T10:01:13.948543127Z" level=info msg="connecting to shim 67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67" 
address="unix:///run/containerd/s/4ea29379930e7db97c8d41fb5bca8207964cf1ba208433fa9eda68f679b167c9" protocol=ttrpc version=3 May 13 10:01:13.969663 systemd[1]: Started cri-containerd-67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67.scope - libcontainer container 67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67. May 13 10:01:13.994834 containerd[1493]: time="2025-05-13T10:01:13.994755419Z" level=info msg="StartContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" returns successfully" May 13 10:01:14.487688 kubelet[2613]: E0513 10:01:14.487650 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:14.495133 update_engine[1481]: I20250513 10:01:14.495057 1481 update_attempter.cc:509] Updating boot flags... May 13 10:01:14.501389 kubelet[2613]: E0513 10:01:14.501348 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:14.507987 containerd[1493]: time="2025-05-13T10:01:14.507945245Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 10:01:14.544912 kubelet[2613]: I0513 10:01:14.544719 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g5jm5" podStartSLOduration=1.047841645 podStartE2EDuration="8.544702438s" podCreationTimestamp="2025-05-13 10:01:06 +0000 UTC" firstStartedPulling="2025-05-13 10:01:06.410683087 +0000 UTC m=+8.062340116" lastFinishedPulling="2025-05-13 10:01:13.90754388 +0000 UTC m=+15.559200909" observedRunningTime="2025-05-13 10:01:14.523827055 +0000 UTC m=+16.175484084" watchObservedRunningTime="2025-05-13 10:01:14.544702438 +0000 UTC m=+16.196359467" May 13 10:01:14.562282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596385897.mount: Deactivated successfully. May 13 10:01:14.567629 containerd[1493]: time="2025-05-13T10:01:14.567007351Z" level=info msg="Container bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:14.579179 containerd[1493]: time="2025-05-13T10:01:14.579131690Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\"" May 13 10:01:14.583092 containerd[1493]: time="2025-05-13T10:01:14.581967906Z" level=info msg="StartContainer for \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\"" May 13 10:01:14.592643 containerd[1493]: time="2025-05-13T10:01:14.592564821Z" level=info msg="connecting to shim bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" protocol=ttrpc version=3 May 13 10:01:14.671796 systemd[1]: Started cri-containerd-bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5.scope - libcontainer container bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5. 
May 13 10:01:14.733656 systemd[1]: cri-containerd-bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5.scope: Deactivated successfully. May 13 10:01:14.734687 containerd[1493]: time="2025-05-13T10:01:14.734456362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" id:\"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" pid:3229 exited_at:{seconds:1747130474 nanos:733886758}" May 13 10:01:14.735906 containerd[1493]: time="2025-05-13T10:01:14.735846806Z" level=info msg="received exit event container_id:\"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" id:\"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" pid:3229 exited_at:{seconds:1747130474 nanos:733886758}" May 13 10:01:14.744463 containerd[1493]: time="2025-05-13T10:01:14.744311168Z" level=info msg="StartContainer for \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" returns successfully" May 13 10:01:14.984001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5-rootfs.mount: Deactivated successfully. May 13 10:01:15.506983 kubelet[2613]: E0513 10:01:15.506953 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:15.507356 kubelet[2613]: E0513 10:01:15.507037 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:15.510628 containerd[1493]: time="2025-05-13T10:01:15.510566843Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 10:01:15.522662 containerd[1493]: time="2025-05-13T10:01:15.521920989Z" level=info msg="Container b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:15.528548 containerd[1493]: time="2025-05-13T10:01:15.528389532Z" level=info msg="CreateContainer within sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\"" May 13 10:01:15.529032 containerd[1493]: time="2025-05-13T10:01:15.528966852Z" level=info msg="StartContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\"" May 13 10:01:15.529906 containerd[1493]: time="2025-05-13T10:01:15.529882540Z" level=info msg="connecting to shim b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8" address="unix:///run/containerd/s/5fcf0fcb3a2f08ebf4e496cc6bd37fa73619417e74d1fe937c5d92615d7ec105" protocol=ttrpc version=3 May 13 10:01:15.555671 systemd[1]: Started cri-containerd-b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8.scope - libcontainer container b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8. 
May 13 10:01:15.592095 containerd[1493]: time="2025-05-13T10:01:15.592052260Z" level=info msg="StartContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" returns successfully" May 13 10:01:15.702764 containerd[1493]: time="2025-05-13T10:01:15.702721912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" id:\"e57e38e5d7f3fb7ccab630c3882c0d7c7057dbbb0ab0611d03987844441c1022\" pid:3296 exited_at:{seconds:1747130475 nanos:702327177}" May 13 10:01:15.709457 kubelet[2613]: I0513 10:01:15.709430 2613 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 10:01:15.740732 systemd[1]: Created slice kubepods-burstable-pod9f298940_d2ed_4d85_b1e5_f55a90b436a9.slice - libcontainer container kubepods-burstable-pod9f298940_d2ed_4d85_b1e5_f55a90b436a9.slice. May 13 10:01:15.757892 systemd[1]: Created slice kubepods-burstable-pod0dfa1211_3ff7_4e6e_b873_bb4252e4a052.slice - libcontainer container kubepods-burstable-pod0dfa1211_3ff7_4e6e_b873_bb4252e4a052.slice. May 13 10:01:15.871678 kubelet[2613]: I0513 10:01:15.871635 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f298940-d2ed-4d85-b1e5-f55a90b436a9-config-volume\") pod \"coredns-668d6bf9bc-297wh\" (UID: \"9f298940-d2ed-4d85-b1e5-f55a90b436a9\") " pod="kube-system/coredns-668d6bf9bc-297wh" May 13 10:01:15.871956 kubelet[2613]: I0513 10:01:15.871838 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t62dk\" (UniqueName: \"kubernetes.io/projected/9f298940-d2ed-4d85-b1e5-f55a90b436a9-kube-api-access-t62dk\") pod \"coredns-668d6bf9bc-297wh\" (UID: \"9f298940-d2ed-4d85-b1e5-f55a90b436a9\") " pod="kube-system/coredns-668d6bf9bc-297wh" May 13 10:01:15.871956 kubelet[2613]: I0513 10:01:15.871899 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfa1211-3ff7-4e6e-b873-bb4252e4a052-config-volume\") pod \"coredns-668d6bf9bc-znqk2\" (UID: \"0dfa1211-3ff7-4e6e-b873-bb4252e4a052\") " pod="kube-system/coredns-668d6bf9bc-znqk2" May 13 10:01:15.871956 kubelet[2613]: I0513 10:01:15.871927 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgst5\" (UniqueName: \"kubernetes.io/projected/0dfa1211-3ff7-4e6e-b873-bb4252e4a052-kube-api-access-qgst5\") pod \"coredns-668d6bf9bc-znqk2\" (UID: \"0dfa1211-3ff7-4e6e-b873-bb4252e4a052\") " pod="kube-system/coredns-668d6bf9bc-znqk2" May 13 10:01:16.046941 kubelet[2613]: E0513 10:01:16.046836 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:16.047764 containerd[1493]: time="2025-05-13T10:01:16.047705091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-297wh,Uid:9f298940-d2ed-4d85-b1e5-f55a90b436a9,Namespace:kube-system,Attempt:0,}" May 13 10:01:16.065586 kubelet[2613]: E0513 10:01:16.063386 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:16.065919 containerd[1493]: time="2025-05-13T10:01:16.065887669Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-znqk2,Uid:0dfa1211-3ff7-4e6e-b873-bb4252e4a052,Namespace:kube-system,Attempt:0,}" May 13 10:01:16.513563 kubelet[2613]: E0513 10:01:16.513503 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:16.528698 kubelet[2613]: I0513 10:01:16.528639 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9b4vw" podStartSLOduration=5.65888291 podStartE2EDuration="11.528623751s" podCreationTimestamp="2025-05-13 10:01:05 +0000 UTC" firstStartedPulling="2025-05-13 10:01:06.100783943 +0000 UTC m=+7.752440972" lastFinishedPulling="2025-05-13 10:01:11.970524784 +0000 UTC m=+13.622181813" observedRunningTime="2025-05-13 10:01:16.528172491 +0000 UTC m=+18.179829520" watchObservedRunningTime="2025-05-13 10:01:16.528623751 +0000 UTC m=+18.180280780" May 13 10:01:17.514320 kubelet[2613]: E0513 10:01:17.514279 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:17.981106 systemd-networkd[1434]: cilium_host: Link UP May 13 10:01:17.981221 systemd-networkd[1434]: cilium_net: Link UP May 13 10:01:17.981351 systemd-networkd[1434]: cilium_net: Gained carrier May 13 10:01:17.981468 systemd-networkd[1434]: cilium_host: Gained carrier May 13 10:01:18.061025 systemd-networkd[1434]: cilium_vxlan: Link UP May 13 10:01:18.061031 systemd-networkd[1434]: cilium_vxlan: Gained carrier May 13 10:01:18.189680 systemd-networkd[1434]: cilium_host: Gained IPv6LL May 13 10:01:18.347564 kernel: NET: Registered PF_ALG protocol family May 13 10:01:18.518065 kubelet[2613]: E0513 10:01:18.517708 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:18.878014 systemd-networkd[1434]: lxc_health: Link UP May 13 10:01:18.885063 systemd-networkd[1434]: lxc_health: Gained carrier May 13 10:01:18.917855 systemd-networkd[1434]: cilium_net: Gained IPv6LL May 13 10:01:19.186056 systemd-networkd[1434]: lxcccec7487cd99: Link UP May 13 10:01:19.195531 kernel: eth0: renamed from tmp39596 May 13 10:01:19.203882 kernel: eth0: renamed from tmpf7414 May 13 10:01:19.203894 systemd-networkd[1434]: lxcd3dff8733d95: Link UP May 13 10:01:19.204460 systemd-networkd[1434]: lxcccec7487cd99: Gained carrier May 13 10:01:19.204750 systemd-networkd[1434]: lxcd3dff8733d95: Gained carrier May 13 10:01:20.005872 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL May 13 10:01:20.007870 kubelet[2613]: E0513 10:01:20.007840 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:20.901737 systemd-networkd[1434]: lxc_health: Gained IPv6LL May 13 10:01:21.029614 systemd-networkd[1434]: lxcccec7487cd99: Gained IPv6LL May 13 10:01:21.221666 systemd-networkd[1434]: lxcd3dff8733d95: Gained IPv6LL May 13 10:01:22.599181 containerd[1493]: time="2025-05-13T10:01:22.599137388Z" level=info msg="connecting to shim 3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc" address="unix:///run/containerd/s/5b5a16fea5d80045e81675c2b9564e29dc86cfbb5a7612ae4518856677781283" namespace=k8s.io protocol=ttrpc version=3 May 13 10:01:22.601578 containerd[1493]: 
time="2025-05-13T10:01:22.601530989Z" level=info msg="connecting to shim f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7" address="unix:///run/containerd/s/e38f63f972571b7c4b9bd65fe022ebb8110ac0c567953516257e6e8b0b9a4d43" namespace=k8s.io protocol=ttrpc version=3 May 13 10:01:22.623638 systemd[1]: Started cri-containerd-3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc.scope - libcontainer container 3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc. May 13 10:01:22.624631 systemd[1]: Started cri-containerd-f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7.scope - libcontainer container f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7. May 13 10:01:22.634345 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:01:22.635153 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:01:22.656731 containerd[1493]: time="2025-05-13T10:01:22.656700872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-znqk2,Uid:0dfa1211-3ff7-4e6e-b873-bb4252e4a052,Namespace:kube-system,Attempt:0,} returns sandbox id \"3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc\"" May 13 10:01:22.659313 kubelet[2613]: E0513 10:01:22.659157 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:22.668268 containerd[1493]: time="2025-05-13T10:01:22.668233595Z" level=info msg="CreateContainer within sandbox \"3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:01:22.669353 containerd[1493]: time="2025-05-13T10:01:22.669331066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-297wh,Uid:9f298940-d2ed-4d85-b1e5-f55a90b436a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7\"" May 13 10:01:22.670063 kubelet[2613]: E0513 10:01:22.670040 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:22.671749 containerd[1493]: time="2025-05-13T10:01:22.671725667Z" level=info msg="CreateContainer within sandbox \"f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:01:22.680646 containerd[1493]: time="2025-05-13T10:01:22.680620604Z" level=info msg="Container a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:22.683982 containerd[1493]: time="2025-05-13T10:01:22.683951060Z" level=info msg="Container 4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029: CDI devices from CRI Config.CDIDevices: []" May 13 10:01:22.688826 containerd[1493]: time="2025-05-13T10:01:22.688772747Z" level=info msg="CreateContainer within sandbox \"f74140c918718ac9f667572fa490cdc94a463160ec8d0d777f201e9ef06828d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029\"" May 13 10:01:22.689351 containerd[1493]: time="2025-05-13T10:01:22.689248634Z" level=info msg="StartContainer for 
\"4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029\"" May 13 10:01:22.690407 containerd[1493]: time="2025-05-13T10:01:22.690373068Z" level=info msg="connecting to shim 4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029" address="unix:///run/containerd/s/e38f63f972571b7c4b9bd65fe022ebb8110ac0c567953516257e6e8b0b9a4d43" protocol=ttrpc version=3 May 13 10:01:22.703683 containerd[1493]: time="2025-05-13T10:01:22.703650327Z" level=info msg="CreateContainer within sandbox \"3959685517d0e32fc997fc8e8a40124127884467852bae884693d6d0b9c498dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43\"" May 13 10:01:22.704528 containerd[1493]: time="2025-05-13T10:01:22.704184021Z" level=info msg="StartContainer for \"a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43\"" May 13 10:01:22.704985 containerd[1493]: time="2025-05-13T10:01:22.704956258Z" level=info msg="connecting to shim a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43" address="unix:///run/containerd/s/5b5a16fea5d80045e81675c2b9564e29dc86cfbb5a7612ae4518856677781283" protocol=ttrpc version=3 May 13 10:01:22.707674 systemd[1]: Started cri-containerd-4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029.scope - libcontainer container 4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029. May 13 10:01:22.724745 systemd[1]: Started cri-containerd-a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43.scope - libcontainer container a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43. May 13 10:01:22.742549 containerd[1493]: time="2025-05-13T10:01:22.742433038Z" level=info msg="StartContainer for \"4459e9804c4bc6daeab02a2a8b6ffb2cd827af1da0bd597926643eff65585029\" returns successfully" May 13 10:01:22.779967 containerd[1493]: time="2025-05-13T10:01:22.777716156Z" level=info msg="StartContainer for \"a65bf4c9912703325b7d46b627753f7b2a5c34d74ab42d5b359c956ac71c7e43\" returns successfully" May 13 10:01:23.529355 kubelet[2613]: E0513 10:01:23.529319 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:23.533113 kubelet[2613]: E0513 10:01:23.533023 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:23.547622 kubelet[2613]: I0513 10:01:23.547406 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-297wh" podStartSLOduration=17.547391495 podStartE2EDuration="17.547391495s" podCreationTimestamp="2025-05-13 10:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:01:23.547039701 +0000 UTC m=+25.198696730" watchObservedRunningTime="2025-05-13 10:01:23.547391495 +0000 UTC m=+25.199048524" May 13 10:01:23.573313 kubelet[2613]: I0513 10:01:23.570589 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-znqk2" podStartSLOduration=17.570564213 podStartE2EDuration="17.570564213s" podCreationTimestamp="2025-05-13 10:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:01:23.569573758 +0000 UTC 
m=+25.221230787" watchObservedRunningTime="2025-05-13 10:01:23.570564213 +0000 UTC m=+25.222221282" May 13 10:01:23.575854 kubelet[2613]: I0513 10:01:23.575827 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:01:23.576260 kubelet[2613]: E0513 10:01:23.576180 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:23.585543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185656078.mount: Deactivated successfully. May 13 10:01:24.534428 kubelet[2613]: E0513 10:01:24.534141 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:24.534428 kubelet[2613]: E0513 10:01:24.534219 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:24.534428 kubelet[2613]: E0513 10:01:24.534344 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:25.536019 kubelet[2613]: E0513 10:01:25.535962 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:25.536019 kubelet[2613]: E0513 10:01:25.535991 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:01:26.379444 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:34756.service - OpenSSH per-connection server daemon (10.0.0.1:34756). May 13 10:01:26.442313 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 34756 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:26.443587 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:26.447618 systemd-logind[1479]: New session 8 of user core. May 13 10:01:26.457679 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 10:01:26.584192 sshd[3947]: Connection closed by 10.0.0.1 port 34756 May 13 10:01:26.584502 sshd-session[3945]: pam_unix(sshd:session): session closed for user core May 13 10:01:26.587189 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:34756.service: Deactivated successfully. May 13 10:01:26.590096 systemd[1]: session-8.scope: Deactivated successfully. May 13 10:01:26.592796 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. May 13 10:01:26.593673 systemd-logind[1479]: Removed session 8. May 13 10:01:31.599719 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760). May 13 10:01:31.655379 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:31.657077 sshd-session[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:31.661095 systemd-logind[1479]: New session 9 of user core. May 13 10:01:31.667650 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 10:01:31.777004 sshd[3969]: Connection closed by 10.0.0.1 port 34760 May 13 10:01:31.777559 sshd-session[3967]: pam_unix(sshd:session): session closed for user core May 13 10:01:31.780860 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:34760.service: Deactivated successfully. May 13 10:01:31.782487 systemd[1]: session-9.scope: Deactivated successfully. May 13 10:01:31.784142 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. May 13 10:01:31.785372 systemd-logind[1479]: Removed session 9. May 13 10:01:36.790500 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:56266.service - OpenSSH per-connection server daemon (10.0.0.1:56266). May 13 10:01:36.854156 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 56266 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:36.856758 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:36.862032 systemd-logind[1479]: New session 10 of user core. May 13 10:01:36.872707 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 10:01:36.984781 sshd[3987]: Connection closed by 10.0.0.1 port 56266 May 13 10:01:36.985194 sshd-session[3985]: pam_unix(sshd:session): session closed for user core May 13 10:01:36.988409 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:56266.service: Deactivated successfully. May 13 10:01:36.990085 systemd[1]: session-10.scope: Deactivated successfully. May 13 10:01:36.992328 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. May 13 10:01:36.993677 systemd-logind[1479]: Removed session 10. May 13 10:01:42.004161 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:56280.service - OpenSSH per-connection server daemon (10.0.0.1:56280). May 13 10:01:42.057375 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 56280 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:42.058200 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:42.063780 systemd-logind[1479]: New session 11 of user core. May 13 10:01:42.067737 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 10:01:42.182589 sshd[4005]: Connection closed by 10.0.0.1 port 56280 May 13 10:01:42.182995 sshd-session[4003]: pam_unix(sshd:session): session closed for user core May 13 10:01:42.192665 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:56280.service: Deactivated successfully. May 13 10:01:42.194405 systemd[1]: session-11.scope: Deactivated successfully. May 13 10:01:42.195153 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. May 13 10:01:42.197847 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:56286.service - OpenSSH per-connection server daemon (10.0.0.1:56286). May 13 10:01:42.198379 systemd-logind[1479]: Removed session 11. May 13 10:01:42.259257 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 56286 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:42.260676 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:42.264925 systemd-logind[1479]: New session 12 of user core. May 13 10:01:42.272683 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 10:01:42.421239 sshd[4021]: Connection closed by 10.0.0.1 port 56286 May 13 10:01:42.422149 sshd-session[4019]: pam_unix(sshd:session): session closed for user core May 13 10:01:42.433099 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:56286.service: Deactivated successfully. May 13 10:01:42.437973 systemd[1]: session-12.scope: Deactivated successfully. May 13 10:01:42.441755 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. May 13 10:01:42.451864 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:56294.service - OpenSSH per-connection server daemon (10.0.0.1:56294). May 13 10:01:42.454175 systemd-logind[1479]: Removed session 12. May 13 10:01:42.511019 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 56294 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:42.512078 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:42.515815 systemd-logind[1479]: New session 13 of user core. May 13 10:01:42.525661 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 10:01:42.635143 sshd[4035]: Connection closed by 10.0.0.1 port 56294 May 13 10:01:42.635644 sshd-session[4033]: pam_unix(sshd:session): session closed for user core May 13 10:01:42.638963 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:56294.service: Deactivated successfully. May 13 10:01:42.640631 systemd[1]: session-13.scope: Deactivated successfully. May 13 10:01:42.641595 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. May 13 10:01:42.642776 systemd-logind[1479]: Removed session 13. May 13 10:01:47.646934 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:32982.service - OpenSSH per-connection server daemon (10.0.0.1:32982). May 13 10:01:47.702627 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 32982 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:47.703780 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:47.708377 systemd-logind[1479]: New session 14 of user core. May 13 10:01:47.718663 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 10:01:47.833821 sshd[4052]: Connection closed by 10.0.0.1 port 32982 May 13 10:01:47.834130 sshd-session[4050]: pam_unix(sshd:session): session closed for user core May 13 10:01:47.836771 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:32982.service: Deactivated successfully. May 13 10:01:47.838626 systemd[1]: session-14.scope: Deactivated successfully. May 13 10:01:47.840986 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. May 13 10:01:47.842020 systemd-logind[1479]: Removed session 14. May 13 10:01:52.850145 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:36750.service - OpenSSH per-connection server daemon (10.0.0.1:36750). May 13 10:01:52.894677 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 36750 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:52.895947 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:52.900560 systemd-logind[1479]: New session 15 of user core. May 13 10:01:52.911727 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 13 10:01:53.022619 sshd[4067]: Connection closed by 10.0.0.1 port 36750 May 13 10:01:53.023278 sshd-session[4065]: pam_unix(sshd:session): session closed for user core May 13 10:01:53.032488 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:36750.service: Deactivated successfully. May 13 10:01:53.034061 systemd[1]: session-15.scope: Deactivated successfully. May 13 10:01:53.034758 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. May 13 10:01:53.037183 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:36756.service - OpenSSH per-connection server daemon (10.0.0.1:36756). May 13 10:01:53.037708 systemd-logind[1479]: Removed session 15. May 13 10:01:53.090240 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 36756 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:53.091360 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:53.095898 systemd-logind[1479]: New session 16 of user core. May 13 10:01:53.104653 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 10:01:53.380627 sshd[4082]: Connection closed by 10.0.0.1 port 36756 May 13 10:01:53.381326 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 13 10:01:53.393843 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:36756.service: Deactivated successfully. May 13 10:01:53.395862 systemd[1]: session-16.scope: Deactivated successfully. May 13 10:01:53.398276 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. May 13 10:01:53.400824 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:36766.service - OpenSSH per-connection server daemon (10.0.0.1:36766). May 13 10:01:53.401712 systemd-logind[1479]: Removed session 16. May 13 10:01:53.459653 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 36766 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:53.460980 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:53.464956 systemd-logind[1479]: New session 17 of user core. May 13 10:01:53.481684 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 10:01:54.240538 sshd[4096]: Connection closed by 10.0.0.1 port 36766 May 13 10:01:54.241005 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 13 10:01:54.254751 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:36766.service: Deactivated successfully. May 13 10:01:54.258153 systemd[1]: session-17.scope: Deactivated successfully. May 13 10:01:54.260830 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. May 13 10:01:54.266658 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:36780.service - OpenSSH per-connection server daemon (10.0.0.1:36780). May 13 10:01:54.269769 systemd-logind[1479]: Removed session 17. May 13 10:01:54.323142 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 36780 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:54.324282 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:54.328055 systemd-logind[1479]: New session 18 of user core. May 13 10:01:54.337733 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 13 10:01:54.556875 sshd[4117]: Connection closed by 10.0.0.1 port 36780 May 13 10:01:54.557662 sshd-session[4115]: pam_unix(sshd:session): session closed for user core May 13 10:01:54.571411 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:36780.service: Deactivated successfully. May 13 10:01:54.573271 systemd[1]: session-18.scope: Deactivated successfully. May 13 10:01:54.577300 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. May 13 10:01:54.577493 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:36784.service - OpenSSH per-connection server daemon (10.0.0.1:36784). May 13 10:01:54.580124 systemd-logind[1479]: Removed session 18. May 13 10:01:54.634568 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 36784 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:54.635061 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:54.639666 systemd-logind[1479]: New session 19 of user core. May 13 10:01:54.655664 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 10:01:54.781563 sshd[4131]: Connection closed by 10.0.0.1 port 36784 May 13 10:01:54.781676 sshd-session[4129]: pam_unix(sshd:session): session closed for user core May 13 10:01:54.785158 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:36784.service: Deactivated successfully. May 13 10:01:54.786745 systemd[1]: session-19.scope: Deactivated successfully. May 13 10:01:54.788793 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. May 13 10:01:54.790349 systemd-logind[1479]: Removed session 19. May 13 10:01:59.805496 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792). May 13 10:01:59.857267 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:01:59.859006 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:01:59.862648 systemd-logind[1479]: New session 20 of user core. May 13 10:01:59.873898 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 10:01:59.983209 sshd[4152]: Connection closed by 10.0.0.1 port 36792 May 13 10:01:59.983556 sshd-session[4150]: pam_unix(sshd:session): session closed for user core May 13 10:01:59.986578 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:36792.service: Deactivated successfully. May 13 10:01:59.990284 systemd[1]: session-20.scope: Deactivated successfully. May 13 10:01:59.992808 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. May 13 10:01:59.994306 systemd-logind[1479]: Removed session 20. May 13 10:02:04.998909 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:39604.service - OpenSSH per-connection server daemon (10.0.0.1:39604). May 13 10:02:05.054582 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 39604 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:05.055892 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:05.060091 systemd-logind[1479]: New session 21 of user core. May 13 10:02:05.069668 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 13 10:02:05.184127 sshd[4168]: Connection closed by 10.0.0.1 port 39604 May 13 10:02:05.185261 sshd-session[4166]: pam_unix(sshd:session): session closed for user core May 13 10:02:05.189199 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:39604.service: Deactivated successfully. May 13 10:02:05.190855 systemd[1]: session-21.scope: Deactivated successfully. May 13 10:02:05.191766 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit. May 13 10:02:05.192718 systemd-logind[1479]: Removed session 21. May 13 10:02:09.428907 kubelet[2613]: E0513 10:02:09.428865 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:10.195801 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:39612.service - OpenSSH per-connection server daemon (10.0.0.1:39612). May 13 10:02:10.239789 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 39612 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:10.240953 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:10.244588 systemd-logind[1479]: New session 22 of user core. May 13 10:02:10.256658 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 10:02:10.365249 sshd[4186]: Connection closed by 10.0.0.1 port 39612 May 13 10:02:10.365586 sshd-session[4184]: pam_unix(sshd:session): session closed for user core May 13 10:02:10.377614 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:39612.service: Deactivated successfully. May 13 10:02:10.379574 systemd[1]: session-22.scope: Deactivated successfully. May 13 10:02:10.380572 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. May 13 10:02:10.382721 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:39616.service - OpenSSH per-connection server daemon (10.0.0.1:39616). May 13 10:02:10.383710 systemd-logind[1479]: Removed session 22. May 13 10:02:10.446031 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 39616 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:10.447154 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:10.451441 systemd-logind[1479]: New session 23 of user core. May 13 10:02:10.458686 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 10:02:12.359598 containerd[1493]: time="2025-05-13T10:02:12.359361417Z" level=info msg="StopContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" with timeout 30 (s)" May 13 10:02:12.360664 containerd[1493]: time="2025-05-13T10:02:12.360593556Z" level=info msg="Stop container \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" with signal terminated" May 13 10:02:12.370111 systemd[1]: cri-containerd-67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67.scope: Deactivated successfully. 
May 13 10:02:12.372177 containerd[1493]: time="2025-05-13T10:02:12.372144595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" id:\"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" pid:3180 exited_at:{seconds:1747130532 nanos:371736722}" May 13 10:02:12.372332 containerd[1493]: time="2025-05-13T10:02:12.372229153Z" level=info msg="received exit event container_id:\"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" id:\"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" pid:3180 exited_at:{seconds:1747130532 nanos:371736722}" May 13 10:02:12.391300 containerd[1493]: time="2025-05-13T10:02:12.391246103Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 10:02:12.396008 containerd[1493]: time="2025-05-13T10:02:12.395972261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" id:\"85accd4853144630184ffa2a1ecc4ad2b641db3606549680822424690bd8b885\" pid:4235 exited_at:{seconds:1747130532 nanos:395372431}" May 13 10:02:12.396012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67-rootfs.mount: Deactivated successfully. May 13 10:02:12.398457 containerd[1493]: time="2025-05-13T10:02:12.398433818Z" level=info msg="StopContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" with timeout 2 (s)" May 13 10:02:12.398827 containerd[1493]: time="2025-05-13T10:02:12.398804492Z" level=info msg="Stop container \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" with signal terminated" May 13 10:02:12.404694 systemd-networkd[1434]: lxc_health: Link DOWN May 13 10:02:12.404700 systemd-networkd[1434]: lxc_health: Lost carrier May 13 10:02:12.405435 containerd[1493]: time="2025-05-13T10:02:12.405401057Z" level=info msg="StopContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" returns successfully" May 13 10:02:12.408989 containerd[1493]: time="2025-05-13T10:02:12.408945875Z" level=info msg="StopPodSandbox for \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\"" May 13 10:02:12.413310 containerd[1493]: time="2025-05-13T10:02:12.413273200Z" level=info msg="Container to stop \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.422393 systemd[1]: cri-containerd-b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8.scope: Deactivated successfully. May 13 10:02:12.424256 systemd[1]: cri-containerd-b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8.scope: Consumed 6.384s CPU time, 124.9M memory peak, 144K read from disk, 12.9M written to disk. 
May 13 10:02:12.424976 containerd[1493]: time="2025-05-13T10:02:12.424945197Z" level=info msg="received exit event container_id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" pid:3268 exited_at:{seconds:1747130532 nanos:424643442}" May 13 10:02:12.425595 containerd[1493]: time="2025-05-13T10:02:12.425237152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" id:\"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" pid:3268 exited_at:{seconds:1747130532 nanos:424643442}" May 13 10:02:12.425383 systemd[1]: cri-containerd-59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95.scope: Deactivated successfully. May 13 10:02:12.427504 containerd[1493]: time="2025-05-13T10:02:12.427462473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" id:\"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" pid:2775 exit_status:137 exited_at:{seconds:1747130532 nanos:427045801}" May 13 10:02:12.450425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8-rootfs.mount: Deactivated successfully. May 13 10:02:12.456573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95-rootfs.mount: Deactivated successfully. May 13 10:02:12.459422 containerd[1493]: time="2025-05-13T10:02:12.459295680Z" level=info msg="shim disconnected" id=59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95 namespace=k8s.io May 13 10:02:12.461109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95-shm.mount: Deactivated successfully. 
May 13 10:02:12.464980 containerd[1493]: time="2025-05-13T10:02:12.459628474Z" level=warning msg="cleaning up after shim disconnected" id=59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95 namespace=k8s.io May 13 10:02:12.464980 containerd[1493]: time="2025-05-13T10:02:12.464974741Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 10:02:12.465100 containerd[1493]: time="2025-05-13T10:02:12.459617234Z" level=info msg="received exit event sandbox_id:\"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" exit_status:137 exited_at:{seconds:1747130532 nanos:427045801}" May 13 10:02:12.465127 containerd[1493]: time="2025-05-13T10:02:12.461118488Z" level=info msg="TearDown network for sandbox \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" successfully" May 13 10:02:12.465149 containerd[1493]: time="2025-05-13T10:02:12.465130419Z" level=info msg="StopPodSandbox for \"59a23483e0439e8e1314ae1755e8cdf54585f898b200a6e6bdd2724630601d95\" returns successfully" May 13 10:02:12.465570 containerd[1493]: time="2025-05-13T10:02:12.463999078Z" level=info msg="StopContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" returns successfully" May 13 10:02:12.466040 containerd[1493]: time="2025-05-13T10:02:12.466013403Z" level=info msg="StopPodSandbox for \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\"" May 13 10:02:12.466093 containerd[1493]: time="2025-05-13T10:02:12.466072962Z" level=info msg="Container to stop \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.466093 containerd[1493]: time="2025-05-13T10:02:12.466084642Z" level=info msg="Container to stop \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.466140 containerd[1493]: time="2025-05-13T10:02:12.466092722Z" level=info msg="Container to stop \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.466140 containerd[1493]: time="2025-05-13T10:02:12.466101082Z" level=info msg="Container to stop \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.466140 containerd[1493]: time="2025-05-13T10:02:12.466109202Z" level=info msg="Container to stop \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 10:02:12.472382 systemd[1]: cri-containerd-6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81.scope: Deactivated successfully. May 13 10:02:12.494098 containerd[1493]: time="2025-05-13T10:02:12.494041636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" id:\"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" pid:2727 exit_status:137 exited_at:{seconds:1747130532 nanos:474627334}" May 13 10:02:12.495585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81-rootfs.mount: Deactivated successfully. 
May 13 10:02:12.500235 containerd[1493]: time="2025-05-13T10:02:12.500188729Z" level=info msg="shim disconnected" id=6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81 namespace=k8s.io May 13 10:02:12.500553 containerd[1493]: time="2025-05-13T10:02:12.500223329Z" level=warning msg="cleaning up after shim disconnected" id=6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81 namespace=k8s.io May 13 10:02:12.500553 containerd[1493]: time="2025-05-13T10:02:12.500266648Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 10:02:12.500553 containerd[1493]: time="2025-05-13T10:02:12.500437285Z" level=info msg="received exit event sandbox_id:\"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" exit_status:137 exited_at:{seconds:1747130532 nanos:474627334}" May 13 10:02:12.500641 containerd[1493]: time="2025-05-13T10:02:12.500599122Z" level=info msg="TearDown network for sandbox \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" successfully" May 13 10:02:12.500641 containerd[1493]: time="2025-05-13T10:02:12.500623202Z" level=info msg="StopPodSandbox for \"6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81\" returns successfully" May 13 10:02:12.613425 kubelet[2613]: I0513 10:02:12.613308 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cni-path\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.613425 kubelet[2613]: I0513 10:02:12.613357 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-hubble-tls\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.613425 kubelet[2613]: I0513 10:02:12.613383 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-bpf-maps\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.613425 kubelet[2613]: I0513 10:02:12.613400 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-etc-cni-netd\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614564 kubelet[2613]: I0513 10:02:12.614425 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-xtables-lock\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614564 kubelet[2613]: I0513 10:02:12.614460 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-lib-modules\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614564 kubelet[2613]: I0513 10:02:12.614456 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod 
"3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.614564 kubelet[2613]: I0513 10:02:12.614528 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.614564 kubelet[2613]: I0513 10:02:12.614547 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.614724 kubelet[2613]: I0513 10:02:12.614623 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d574392a-42d2-494a-9ad9-ba13b5d31061-cilium-config-path\") pod \"d574392a-42d2-494a-9ad9-ba13b5d31061\" (UID: \"d574392a-42d2-494a-9ad9-ba13b5d31061\") " May 13 10:02:12.614724 kubelet[2613]: I0513 10:02:12.614690 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-cgroup\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614724 kubelet[2613]: I0513 10:02:12.614713 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-hostproc\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614789 kubelet[2613]: I0513 10:02:12.614744 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-run\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614789 kubelet[2613]: I0513 10:02:12.614773 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf6gv\" (UniqueName: \"kubernetes.io/projected/d574392a-42d2-494a-9ad9-ba13b5d31061-kube-api-access-pf6gv\") pod \"d574392a-42d2-494a-9ad9-ba13b5d31061\" (UID: \"d574392a-42d2-494a-9ad9-ba13b5d31061\") " May 13 10:02:12.614830 kubelet[2613]: I0513 10:02:12.614794 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qjm\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-kube-api-access-d6qjm\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614830 kubelet[2613]: I0513 10:02:12.614809 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-net\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614830 kubelet[2613]: I0513 
10:02:12.614824 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-kernel\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614897 kubelet[2613]: I0513 10:02:12.614849 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-config-path\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614897 kubelet[2613]: I0513 10:02:12.614867 2613 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d296026-a34d-4f60-8999-0d54c59fa524-clustermesh-secrets\") pod \"3d296026-a34d-4f60-8999-0d54c59fa524\" (UID: \"3d296026-a34d-4f60-8999-0d54c59fa524\") " May 13 10:02:12.614938 kubelet[2613]: I0513 10:02:12.614904 2613 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.614938 kubelet[2613]: I0513 10:02:12.614921 2613 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.614938 kubelet[2613]: I0513 10:02:12.614930 2613 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.615390 kubelet[2613]: I0513 10:02:12.615032 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.615390 kubelet[2613]: I0513 10:02:12.615060 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.615390 kubelet[2613]: I0513 10:02:12.615102 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.616448 kubelet[2613]: I0513 10:02:12.616407 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.616528 kubelet[2613]: I0513 10:02:12.616468 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.616528 kubelet[2613]: I0513 10:02:12.616488 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.617069 kubelet[2613]: I0513 10:02:12.617041 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d574392a-42d2-494a-9ad9-ba13b5d31061-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d574392a-42d2-494a-9ad9-ba13b5d31061" (UID: "d574392a-42d2-494a-9ad9-ba13b5d31061"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 10:02:12.617202 kubelet[2613]: I0513 10:02:12.617187 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 10:02:12.619005 kubelet[2613]: I0513 10:02:12.618958 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 10:02:12.619120 kubelet[2613]: I0513 10:02:12.619078 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d574392a-42d2-494a-9ad9-ba13b5d31061-kube-api-access-pf6gv" (OuterVolumeSpecName: "kube-api-access-pf6gv") pod "d574392a-42d2-494a-9ad9-ba13b5d31061" (UID: "d574392a-42d2-494a-9ad9-ba13b5d31061"). InnerVolumeSpecName "kube-api-access-pf6gv". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 10:02:12.619390 kubelet[2613]: I0513 10:02:12.619369 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 10:02:12.619608 kubelet[2613]: I0513 10:02:12.619573 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d296026-a34d-4f60-8999-0d54c59fa524-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 10:02:12.620071 kubelet[2613]: I0513 10:02:12.620043 2613 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-kube-api-access-d6qjm" (OuterVolumeSpecName: "kube-api-access-d6qjm") pod "3d296026-a34d-4f60-8999-0d54c59fa524" (UID: "3d296026-a34d-4f60-8999-0d54c59fa524"). InnerVolumeSpecName "kube-api-access-d6qjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 10:02:12.625315 kubelet[2613]: I0513 10:02:12.625282 2613 scope.go:117] "RemoveContainer" containerID="b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8" May 13 10:02:12.629500 containerd[1493]: time="2025-05-13T10:02:12.629470282Z" level=info msg="RemoveContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\"" May 13 10:02:12.631233 systemd[1]: Removed slice kubepods-besteffort-podd574392a_42d2_494a_9ad9_ba13b5d31061.slice - libcontainer container kubepods-besteffort-podd574392a_42d2_494a_9ad9_ba13b5d31061.slice. May 13 10:02:12.632909 systemd[1]: Removed slice kubepods-burstable-pod3d296026_a34d_4f60_8999_0d54c59fa524.slice - libcontainer container kubepods-burstable-pod3d296026_a34d_4f60_8999_0d54c59fa524.slice. May 13 10:02:12.632996 systemd[1]: kubepods-burstable-pod3d296026_a34d_4f60_8999_0d54c59fa524.slice: Consumed 6.534s CPU time, 125.2M memory peak, 6.1M read from disk, 12.9M written to disk. May 13 10:02:12.652714 containerd[1493]: time="2025-05-13T10:02:12.652666279Z" level=info msg="RemoveContainer for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" returns successfully" May 13 10:02:12.653328 kubelet[2613]: I0513 10:02:12.653283 2613 scope.go:117] "RemoveContainer" containerID="bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5" May 13 10:02:12.656212 containerd[1493]: time="2025-05-13T10:02:12.655727906Z" level=info msg="RemoveContainer for \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\"" May 13 10:02:12.659258 containerd[1493]: time="2025-05-13T10:02:12.659225285Z" level=info msg="RemoveContainer for \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" returns successfully" May 13 10:02:12.659495 kubelet[2613]: I0513 10:02:12.659468 2613 scope.go:117] "RemoveContainer" containerID="957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b" May 13 10:02:12.661636 containerd[1493]: time="2025-05-13T10:02:12.661606603Z" level=info msg="RemoveContainer for \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\"" May 13 10:02:12.664954 containerd[1493]: time="2025-05-13T10:02:12.664920706Z" level=info msg="RemoveContainer for \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" returns successfully" May 13 10:02:12.665158 kubelet[2613]: I0513 10:02:12.665125 2613 scope.go:117] "RemoveContainer" containerID="bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6" May 13 10:02:12.679328 containerd[1493]: time="2025-05-13T10:02:12.679302136Z" level=info msg="RemoveContainer for \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\"" May 13 10:02:12.682230 containerd[1493]: time="2025-05-13T10:02:12.682192246Z" level=info msg="RemoveContainer for \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" returns successfully" May 13 10:02:12.682440 kubelet[2613]: I0513 10:02:12.682365 2613 scope.go:117] "RemoveContainer" 
containerID="ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05" May 13 10:02:12.683818 containerd[1493]: time="2025-05-13T10:02:12.683766138Z" level=info msg="RemoveContainer for \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\"" May 13 10:02:12.686331 containerd[1493]: time="2025-05-13T10:02:12.686299974Z" level=info msg="RemoveContainer for \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" returns successfully" May 13 10:02:12.686488 kubelet[2613]: I0513 10:02:12.686454 2613 scope.go:117] "RemoveContainer" containerID="b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8" May 13 10:02:12.686706 containerd[1493]: time="2025-05-13T10:02:12.686664328Z" level=error msg="ContainerStatus for \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\": not found" May 13 10:02:12.686819 kubelet[2613]: E0513 10:02:12.686795 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\": not found" containerID="b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8" May 13 10:02:12.686917 kubelet[2613]: I0513 10:02:12.686829 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8"} err="failed to get container status \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2e3acf449d32d48bc893571c7beb0181257a1ad3c10f9d40e3857c9935e8ec8\": not found" May 13 10:02:12.686917 kubelet[2613]: I0513 10:02:12.686910 2613 scope.go:117] "RemoveContainer" containerID="bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5" May 13 10:02:12.687159 containerd[1493]: time="2025-05-13T10:02:12.687112840Z" level=error msg="ContainerStatus for \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\": not found" May 13 10:02:12.687300 kubelet[2613]: E0513 10:02:12.687275 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\": not found" containerID="bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5" May 13 10:02:12.687336 kubelet[2613]: I0513 10:02:12.687306 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5"} err="failed to get container status \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bca4abd5a842fc869b9b3bc67ac17abe66ce7334bcff39c0ebf77338395dfba5\": not found" May 13 10:02:12.687336 kubelet[2613]: I0513 10:02:12.687327 2613 scope.go:117] "RemoveContainer" containerID="957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b" May 13 10:02:12.687600 containerd[1493]: time="2025-05-13T10:02:12.687495993Z" level=error 
msg="ContainerStatus for \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\": not found" May 13 10:02:12.687719 kubelet[2613]: E0513 10:02:12.687699 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\": not found" containerID="957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b" May 13 10:02:12.687795 kubelet[2613]: I0513 10:02:12.687778 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b"} err="failed to get container status \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\": rpc error: code = NotFound desc = an error occurred when try to find container \"957616a95c2c48ee2a268870085c291f1c439a0019395ed08042b7ce5bcb693b\": not found" May 13 10:02:12.687845 kubelet[2613]: I0513 10:02:12.687834 2613 scope.go:117] "RemoveContainer" containerID="bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6" May 13 10:02:12.688755 containerd[1493]: time="2025-05-13T10:02:12.688721292Z" level=error msg="ContainerStatus for \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\": not found" May 13 10:02:12.688932 kubelet[2613]: E0513 10:02:12.688908 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\": not found" containerID="bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6" May 13 10:02:12.688972 kubelet[2613]: I0513 10:02:12.688936 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6"} err="failed to get container status \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd4abc1bcf9951aaf5e883a9ab6e51676b369f1cf604ee59e27dd375a16ac1b6\": not found" May 13 10:02:12.688972 kubelet[2613]: I0513 10:02:12.688953 2613 scope.go:117] "RemoveContainer" containerID="ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05" May 13 10:02:12.689213 containerd[1493]: time="2025-05-13T10:02:12.689182804Z" level=error msg="ContainerStatus for \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\": not found" May 13 10:02:12.689449 kubelet[2613]: E0513 10:02:12.689344 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\": not found" containerID="ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05" May 13 10:02:12.689449 kubelet[2613]: I0513 10:02:12.689369 2613 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05"} err="failed to get container status \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea0e275ddfb39d9b194e7f6db5877babda3e2b0b23d49ca4f88f2d21992d3d05\": not found" May 13 10:02:12.689449 kubelet[2613]: I0513 10:02:12.689384 2613 scope.go:117] "RemoveContainer" containerID="67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67" May 13 10:02:12.690768 containerd[1493]: time="2025-05-13T10:02:12.690743337Z" level=info msg="RemoveContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\"" May 13 10:02:12.693371 containerd[1493]: time="2025-05-13T10:02:12.693331932Z" level=info msg="RemoveContainer for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" returns successfully" May 13 10:02:12.693563 kubelet[2613]: I0513 10:02:12.693528 2613 scope.go:117] "RemoveContainer" containerID="67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67" May 13 10:02:12.693761 containerd[1493]: time="2025-05-13T10:02:12.693716765Z" level=error msg="ContainerStatus for \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\": not found" May 13 10:02:12.693869 kubelet[2613]: E0513 10:02:12.693848 2613 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\": not found" containerID="67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67" May 13 10:02:12.693910 kubelet[2613]: I0513 10:02:12.693876 2613 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67"} err="failed to get container status \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\": rpc error: code = NotFound desc = an error occurred when try to find container \"67515922131676181a20a1f81b1d248c2d9e41a225fd7d003f150de9e33ddd67\": not found" May 13 10:02:12.715143 kubelet[2613]: I0513 10:02:12.715079 2613 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715143 kubelet[2613]: I0513 10:02:12.715115 2613 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715143 kubelet[2613]: I0513 10:02:12.715132 2613 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715143 kubelet[2613]: I0513 10:02:12.715149 2613 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715165 2613 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715181 2613 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d574392a-42d2-494a-9ad9-ba13b5d31061-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715195 2613 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715208 2613 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pf6gv\" (UniqueName: \"kubernetes.io/projected/d574392a-42d2-494a-9ad9-ba13b5d31061-kube-api-access-pf6gv\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715222 2613 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6qjm\" (UniqueName: \"kubernetes.io/projected/3d296026-a34d-4f60-8999-0d54c59fa524-kube-api-access-d6qjm\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715236 2613 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715248 2613 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d296026-a34d-4f60-8999-0d54c59fa524-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715321 kubelet[2613]: I0513 10:02:12.715254 2613 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d296026-a34d-4f60-8999-0d54c59fa524-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 10:02:12.715490 kubelet[2613]: I0513 10:02:12.715262 2613 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d296026-a34d-4f60-8999-0d54c59fa524-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 10:02:13.395060 systemd[1]: var-lib-kubelet-pods-d574392a\x2d42d2\x2d494a\x2d9ad9\x2dba13b5d31061-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpf6gv.mount: Deactivated successfully. May 13 10:02:13.395166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6864f5492b2c17b236a17b49c9d630823300e4b373b7ef4b4e3f59fe8cbe8e81-shm.mount: Deactivated successfully. May 13 10:02:13.395217 systemd[1]: var-lib-kubelet-pods-3d296026\x2da34d\x2d4f60\x2d8999\x2d0d54c59fa524-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6qjm.mount: Deactivated successfully. May 13 10:02:13.395266 systemd[1]: var-lib-kubelet-pods-3d296026\x2da34d\x2d4f60\x2d8999\x2d0d54c59fa524-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 10:02:13.395326 systemd[1]: var-lib-kubelet-pods-3d296026\x2da34d\x2d4f60\x2d8999\x2d0d54c59fa524-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 10:02:13.480665 kubelet[2613]: E0513 10:02:13.480619 2613 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 10:02:14.324078 sshd[4201]: Connection closed by 10.0.0.1 port 39616 May 13 10:02:14.324654 sshd-session[4199]: pam_unix(sshd:session): session closed for user core May 13 10:02:14.336705 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:39616.service: Deactivated successfully. May 13 10:02:14.338238 systemd[1]: session-23.scope: Deactivated successfully. May 13 10:02:14.338417 systemd[1]: session-23.scope: Consumed 1.252s CPU time, 23.8M memory peak. May 13 10:02:14.339033 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. May 13 10:02:14.342024 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292). May 13 10:02:14.342742 systemd-logind[1479]: Removed session 23. May 13 10:02:14.395632 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:14.396995 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:14.400581 systemd-logind[1479]: New session 24 of user core. May 13 10:02:14.406640 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 10:02:14.425979 kubelet[2613]: E0513 10:02:14.425936 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:14.428739 kubelet[2613]: I0513 10:02:14.428696 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d296026-a34d-4f60-8999-0d54c59fa524" path="/var/lib/kubelet/pods/3d296026-a34d-4f60-8999-0d54c59fa524/volumes" May 13 10:02:14.429218 kubelet[2613]: I0513 10:02:14.429200 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d574392a-42d2-494a-9ad9-ba13b5d31061" path="/var/lib/kubelet/pods/d574392a-42d2-494a-9ad9-ba13b5d31061/volumes" May 13 10:02:15.486548 sshd[4355]: Connection closed by 10.0.0.1 port 37292 May 13 10:02:15.485722 sshd-session[4353]: pam_unix(sshd:session): session closed for user core May 13 10:02:15.496175 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:37292.service: Deactivated successfully. May 13 10:02:15.497972 systemd[1]: session-24.scope: Deactivated successfully. May 13 10:02:15.498172 systemd[1]: session-24.scope: Consumed 1.001s CPU time, 23.6M memory peak. May 13 10:02:15.499259 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit. May 13 10:02:15.503797 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:37296.service - OpenSSH per-connection server daemon (10.0.0.1:37296). May 13 10:02:15.505696 systemd-logind[1479]: Removed session 24. May 13 10:02:15.529098 kubelet[2613]: I0513 10:02:15.528879 2613 memory_manager.go:355] "RemoveStaleState removing state" podUID="3d296026-a34d-4f60-8999-0d54c59fa524" containerName="cilium-agent" May 13 10:02:15.529098 kubelet[2613]: I0513 10:02:15.529092 2613 memory_manager.go:355] "RemoveStaleState removing state" podUID="d574392a-42d2-494a-9ad9-ba13b5d31061" containerName="cilium-operator" May 13 10:02:15.540544 systemd[1]: Created slice kubepods-burstable-podce9b26cf_4613_4883_978d_77a35e64369c.slice - libcontainer container kubepods-burstable-podce9b26cf_4613_4883_978d_77a35e64369c.slice. 
May 13 10:02:15.560598 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 37296 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:15.563091 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:15.569610 systemd-logind[1479]: New session 25 of user core. May 13 10:02:15.578724 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 10:02:15.629429 sshd[4369]: Connection closed by 10.0.0.1 port 37296 May 13 10:02:15.629892 sshd-session[4367]: pam_unix(sshd:session): session closed for user core May 13 10:02:15.633855 kubelet[2613]: I0513 10:02:15.633831 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-hostproc\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.633948 kubelet[2613]: I0513 10:02:15.633867 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce9b26cf-4613-4883-978d-77a35e64369c-cilium-ipsec-secrets\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.633948 kubelet[2613]: I0513 10:02:15.633886 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-cilium-cgroup\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634081 kubelet[2613]: I0513 10:02:15.634064 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-lib-modules\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634110 kubelet[2613]: I0513 10:02:15.634094 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-host-proc-sys-net\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634185 kubelet[2613]: I0513 10:02:15.634133 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-bpf-maps\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634292 kubelet[2613]: I0513 10:02:15.634272 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce9b26cf-4613-4883-978d-77a35e64369c-hubble-tls\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634322 kubelet[2613]: I0513 10:02:15.634313 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-cilium-run\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " 
pod="kube-system/cilium-gjgjs" May 13 10:02:15.634349 kubelet[2613]: I0513 10:02:15.634333 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-cni-path\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634370 kubelet[2613]: I0513 10:02:15.634349 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce9b26cf-4613-4883-978d-77a35e64369c-cilium-config-path\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634370 kubelet[2613]: I0513 10:02:15.634367 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-etc-cni-netd\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634410 kubelet[2613]: I0513 10:02:15.634382 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-host-proc-sys-kernel\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634410 kubelet[2613]: I0513 10:02:15.634397 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hrq\" (UniqueName: \"kubernetes.io/projected/ce9b26cf-4613-4883-978d-77a35e64369c-kube-api-access-t6hrq\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634450 kubelet[2613]: I0513 10:02:15.634415 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce9b26cf-4613-4883-978d-77a35e64369c-xtables-lock\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.634450 kubelet[2613]: I0513 10:02:15.634430 2613 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce9b26cf-4613-4883-978d-77a35e64369c-clustermesh-secrets\") pod \"cilium-gjgjs\" (UID: \"ce9b26cf-4613-4883-978d-77a35e64369c\") " pod="kube-system/cilium-gjgjs" May 13 10:02:15.642606 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:37296.service: Deactivated successfully. May 13 10:02:15.644804 systemd[1]: session-25.scope: Deactivated successfully. May 13 10:02:15.645687 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit. May 13 10:02:15.648228 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:37308.service - OpenSSH per-connection server daemon (10.0.0.1:37308). May 13 10:02:15.649110 systemd-logind[1479]: Removed session 25. May 13 10:02:15.698398 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 37308 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:15.699587 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:15.703792 systemd-logind[1479]: New session 26 of user core. 
May 13 10:02:15.725682 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 10:02:15.844015 kubelet[2613]: E0513 10:02:15.843901 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:15.844702 containerd[1493]: time="2025-05-13T10:02:15.844506753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjgjs,Uid:ce9b26cf-4613-4883-978d-77a35e64369c,Namespace:kube-system,Attempt:0,}" May 13 10:02:15.864743 containerd[1493]: time="2025-05-13T10:02:15.864695042Z" level=info msg="connecting to shim b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" namespace=k8s.io protocol=ttrpc version=3 May 13 10:02:15.884699 systemd[1]: Started cri-containerd-b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209.scope - libcontainer container b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209. May 13 10:02:15.910915 containerd[1493]: time="2025-05-13T10:02:15.910870183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjgjs,Uid:ce9b26cf-4613-4883-978d-77a35e64369c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\"" May 13 10:02:15.911876 kubelet[2613]: E0513 10:02:15.911847 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:15.914565 containerd[1493]: time="2025-05-13T10:02:15.914532933Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 10:02:15.921405 containerd[1493]: time="2025-05-13T10:02:15.919825822Z" level=info msg="Container a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:15.930636 containerd[1493]: time="2025-05-13T10:02:15.930458280Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\"" May 13 10:02:15.931557 containerd[1493]: time="2025-05-13T10:02:15.931193990Z" level=info msg="StartContainer for \"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\"" May 13 10:02:15.933531 containerd[1493]: time="2025-05-13T10:02:15.933475719Z" level=info msg="connecting to shim a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" protocol=ttrpc version=3 May 13 10:02:15.953687 systemd[1]: Started cri-containerd-a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e.scope - libcontainer container a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e. May 13 10:02:15.977770 containerd[1493]: time="2025-05-13T10:02:15.977733606Z" level=info msg="StartContainer for \"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\" returns successfully" May 13 10:02:16.005771 systemd[1]: cri-containerd-a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e.scope: Deactivated successfully. 
May 13 10:02:16.007015 containerd[1493]: time="2025-05-13T10:02:16.006974299Z" level=info msg="received exit event container_id:\"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\" id:\"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\" pid:4449 exited_at:{seconds:1747130536 nanos:6727902}" May 13 10:02:16.007920 containerd[1493]: time="2025-05-13T10:02:16.007827169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\" id:\"a6d872c63aed9944192aa802ea999365549c833f65a69a58b74203ab7b5c321e\" pid:4449 exited_at:{seconds:1747130536 nanos:6727902}" May 13 10:02:16.638382 kubelet[2613]: E0513 10:02:16.638250 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 10:02:16.642027 containerd[1493]: time="2025-05-13T10:02:16.641962771Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 10:02:16.649567 containerd[1493]: time="2025-05-13T10:02:16.649369241Z" level=info msg="Container c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd: CDI devices from CRI Config.CDIDevices: []" May 13 10:02:16.654627 containerd[1493]: time="2025-05-13T10:02:16.654574578Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\"" May 13 10:02:16.655800 containerd[1493]: time="2025-05-13T10:02:16.655767403Z" level=info msg="StartContainer for \"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\"" May 13 10:02:16.657203 containerd[1493]: time="2025-05-13T10:02:16.657144586Z" level=info msg="connecting to shim c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" protocol=ttrpc version=3 May 13 10:02:16.678687 systemd[1]: Started cri-containerd-c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd.scope - libcontainer container c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd. May 13 10:02:16.702419 containerd[1493]: time="2025-05-13T10:02:16.702367916Z" level=info msg="StartContainer for \"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\" returns successfully" May 13 10:02:16.710373 systemd[1]: cri-containerd-c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd.scope: Deactivated successfully. 
May 13 10:02:16.711320 containerd[1493]: time="2025-05-13T10:02:16.710950771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\" id:\"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\" pid:4500 exited_at:{seconds:1747130536 nanos:710473737}"
May 13 10:02:16.711611 containerd[1493]: time="2025-05-13T10:02:16.711541364Z" level=info msg="received exit event container_id:\"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\" id:\"c650fc635909493106e810f0534839c0362b65ea5a98b73cc9582119995980dd\" pid:4500 exited_at:{seconds:1747130536 nanos:710473737}"
May 13 10:02:17.642216 kubelet[2613]: E0513 10:02:17.642172 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:17.646117 containerd[1493]: time="2025-05-13T10:02:17.645764210Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 10:02:17.655782 containerd[1493]: time="2025-05-13T10:02:17.655749341Z" level=info msg="Container 13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:17.664140 containerd[1493]: time="2025-05-13T10:02:17.664096169Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\""
May 13 10:02:17.665131 containerd[1493]: time="2025-05-13T10:02:17.664555244Z" level=info msg="StartContainer for \"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\""
May 13 10:02:17.665794 containerd[1493]: time="2025-05-13T10:02:17.665760911Z" level=info msg="connecting to shim 13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" protocol=ttrpc version=3
May 13 10:02:17.685657 systemd[1]: Started cri-containerd-13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5.scope - libcontainer container 13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5.
May 13 10:02:17.714873 containerd[1493]: time="2025-05-13T10:02:17.714834453Z" level=info msg="StartContainer for \"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\" returns successfully"
May 13 10:02:17.715405 systemd[1]: cri-containerd-13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5.scope: Deactivated successfully.
May 13 10:02:17.716890 containerd[1493]: time="2025-05-13T10:02:17.716862030Z" level=info msg="received exit event container_id:\"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\" id:\"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\" pid:4546 exited_at:{seconds:1747130537 nanos:716690152}"
May 13 10:02:17.716980 containerd[1493]: time="2025-05-13T10:02:17.716937669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\" id:\"13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5\" pid:4546 exited_at:{seconds:1747130537 nanos:716690152}"
May 13 10:02:17.736249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a1bbb7d894d5076c367ce5ce0fcd554177465c9c08067b049ccef72f05e7d5-rootfs.mount: Deactivated successfully.
May 13 10:02:18.482175 kubelet[2613]: E0513 10:02:18.482133 2613 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 10:02:18.650535 kubelet[2613]: E0513 10:02:18.650483 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:18.653427 containerd[1493]: time="2025-05-13T10:02:18.653299280Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 10:02:18.673800 containerd[1493]: time="2025-05-13T10:02:18.673750079Z" level=info msg="Container b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:18.676336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844630051.mount: Deactivated successfully.
May 13 10:02:18.678654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656176175.mount: Deactivated successfully.
May 13 10:02:18.683369 containerd[1493]: time="2025-05-13T10:02:18.683324985Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\""
May 13 10:02:18.683868 containerd[1493]: time="2025-05-13T10:02:18.683846340Z" level=info msg="StartContainer for \"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\""
May 13 10:02:18.684696 containerd[1493]: time="2025-05-13T10:02:18.684668212Z" level=info msg="connecting to shim b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" protocol=ttrpc version=3
May 13 10:02:18.705669 systemd[1]: Started cri-containerd-b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec.scope - libcontainer container b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec.
May 13 10:02:18.739097 systemd[1]: cri-containerd-b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec.scope: Deactivated successfully.
May 13 10:02:18.739755 containerd[1493]: time="2025-05-13T10:02:18.739438195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\" id:\"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\" pid:4585 exited_at:{seconds:1747130538 nanos:739220518}"
May 13 10:02:18.740762 containerd[1493]: time="2025-05-13T10:02:18.740657303Z" level=info msg="received exit event container_id:\"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\" id:\"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\" pid:4585 exited_at:{seconds:1747130538 nanos:739220518}"
May 13 10:02:18.746849 containerd[1493]: time="2025-05-13T10:02:18.746792763Z" level=info msg="StartContainer for \"b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec\" returns successfully"
May 13 10:02:18.757942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b78c382a5a5a3a7da7cc5a65eca7f955f2b314ee9e8436dd37765df3822e7bec-rootfs.mount: Deactivated successfully.
May 13 10:02:19.656357 kubelet[2613]: E0513 10:02:19.656319 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:19.660545 containerd[1493]: time="2025-05-13T10:02:19.660481666Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 10:02:19.689135 containerd[1493]: time="2025-05-13T10:02:19.689082058Z" level=info msg="Container 23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b: CDI devices from CRI Config.CDIDevices: []"
May 13 10:02:19.695706 containerd[1493]: time="2025-05-13T10:02:19.695623922Z" level=info msg="CreateContainer within sandbox \"b225afc65615fa356a5ff729f0137f97ca53c105306f73a05194798a83331209\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\""
May 13 10:02:19.696157 containerd[1493]: time="2025-05-13T10:02:19.696117717Z" level=info msg="StartContainer for \"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\""
May 13 10:02:19.697071 containerd[1493]: time="2025-05-13T10:02:19.697031229Z" level=info msg="connecting to shim 23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b" address="unix:///run/containerd/s/fbd82bfcc8d3831b720273d825f287d4989c8fe2b0201b783830d2917f75bd44" protocol=ttrpc version=3
May 13 10:02:19.724670 systemd[1]: Started cri-containerd-23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b.scope - libcontainer container 23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b.
May 13 10:02:19.751094 containerd[1493]: time="2025-05-13T10:02:19.751056001Z" level=info msg="StartContainer for \"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" returns successfully"
May 13 10:02:19.807709 containerd[1493]: time="2025-05-13T10:02:19.807660470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" id:\"506e2db5e55365498b2c2189bcb19ff87a14e60e688eae9e7ae4d720d1e17306\" pid:4654 exited_at:{seconds:1747130539 nanos:807320353}"
May 13 10:02:20.032602 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 10:02:20.481211 kubelet[2613]: I0513 10:02:20.481167 2613 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T10:02:20Z","lastTransitionTime":"2025-05-13T10:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 10:02:20.665016 kubelet[2613]: E0513 10:02:20.664683 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:20.679186 kubelet[2613]: I0513 10:02:20.679121 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gjgjs" podStartSLOduration=5.679104132 podStartE2EDuration="5.679104132s" podCreationTimestamp="2025-05-13 10:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:02:20.678263619 +0000 UTC m=+82.329920648" watchObservedRunningTime="2025-05-13 10:02:20.679104132 +0000 UTC m=+82.330761201"
May 13 10:02:21.845234 kubelet[2613]: E0513 10:02:21.845182 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:22.056525 containerd[1493]: time="2025-05-13T10:02:22.056462104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" id:\"9e979850a57d7c73a7385fc9e3a5e2e939246c21cb77c83db9b96f0af10cce27\" pid:4928 exit_status:1 exited_at:{seconds:1747130542 nanos:56134346}"
May 13 10:02:22.896593 systemd-networkd[1434]: lxc_health: Link UP
May 13 10:02:22.904048 systemd-networkd[1434]: lxc_health: Gained carrier
May 13 10:02:23.845677 kubelet[2613]: E0513 10:02:23.845497 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:24.178436 containerd[1493]: time="2025-05-13T10:02:24.178393185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" id:\"6c2decf4900a6137c602285ca558944341091a61d511e3e8394fff3adc2539d0\" pid:5187 exited_at:{seconds:1747130544 nanos:178127666}"
May 13 10:02:24.390083 systemd-networkd[1434]: lxc_health: Gained IPv6LL
May 13 10:02:24.425639 kubelet[2613]: E0513 10:02:24.425610 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:24.673034 kubelet[2613]: E0513 10:02:24.673002 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:25.425778 kubelet[2613]: E0513 10:02:25.425721 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:25.674542 kubelet[2613]: E0513 10:02:25.674464 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:26.346104 containerd[1493]: time="2025-05-13T10:02:26.346067483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" id:\"47b75d4315003dd02a8133ef2e544bc1b03ee5fb510f66a9a80d6abfa6608239\" pid:5217 exited_at:{seconds:1747130546 nanos:345579444}"
May 13 10:02:26.427023 kubelet[2613]: E0513 10:02:26.426985 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 10:02:28.445444 containerd[1493]: time="2025-05-13T10:02:28.445402513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23db6ab7f2d6a4c1fee9494ce9452679f03c6ad57158688d6f56f2c6837d9a4b\" id:\"0b9c3e50716915d6282152402536ab0759415184a2bac8b1988c237ee49ef70d\" pid:5246 exited_at:{seconds:1747130548 nanos:444960633}"
May 13 10:02:28.450212 sshd[4378]: Connection closed by 10.0.0.1 port 37308
May 13 10:02:28.450640 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
May 13 10:02:28.454273 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:37308.service: Deactivated successfully.
May 13 10:02:28.456402 systemd[1]: session-26.scope: Deactivated successfully.
May 13 10:02:28.457325 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit.
May 13 10:02:28.458611 systemd-logind[1479]: Removed session 26.