Apr 17 23:24:49.859616 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:24:49.859633 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:24:49.859642 kernel: BIOS-provided physical RAM map:
Apr 17 23:24:49.859646 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:24:49.859650 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 23:24:49.859654 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 23:24:49.859660 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 23:24:49.859664 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 23:24:49.859668 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 17 23:24:49.859672 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 17 23:24:49.859678 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 17 23:24:49.859683 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 17 23:24:49.859687 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 17 23:24:49.859691 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 17 23:24:49.859697 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 17 23:24:49.859701 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 23:24:49.859707 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 17 23:24:49.859711 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 17 23:24:49.859716 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 23:24:49.859720 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:24:49.859725 kernel: NX (Execute Disable) protection: active
Apr 17 23:24:49.859729 kernel: APIC: Static calls initialized
Apr 17 23:24:49.859734 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:24:49.859738 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 17 23:24:49.859766 kernel: SMBIOS 2.8 present.
Apr 17 23:24:49.859772 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 17 23:24:49.859776 kernel: Hypervisor detected: KVM
Apr 17 23:24:49.859782 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:24:49.859787 kernel: kvm-clock: using sched offset of 4729127746 cycles
Apr 17 23:24:49.859792 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:24:49.859797 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:24:49.859802 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:24:49.859807 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:24:49.859812 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 17 23:24:49.859816 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:24:49.859821 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:24:49.859827 kernel: Using GB pages for direct mapping
Apr 17 23:24:49.859832 kernel: Secure boot disabled
Apr 17 23:24:49.859836 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:24:49.859841 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 23:24:49.859848 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:24:49.859853 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859858 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859865 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 23:24:49.859870 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859875 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859880 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859884 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:24:49.859889 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:24:49.859894 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 23:24:49.859900 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 23:24:49.859905 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 23:24:49.859910 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 23:24:49.859915 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 23:24:49.859920 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 23:24:49.859925 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 23:24:49.859929 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 23:24:49.859934 kernel: No NUMA configuration found
Apr 17 23:24:49.859939 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 17 23:24:49.859958 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 17 23:24:49.859963 kernel: Zone ranges:
Apr 17 23:24:49.859968 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:24:49.859989 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 17 23:24:49.859995 kernel: Normal empty
Apr 17 23:24:49.860000 kernel: Movable zone start for each node
Apr 17 23:24:49.860005 kernel: Early memory node ranges
Apr 17 23:24:49.860010 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:24:49.860014 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 23:24:49.860019 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 23:24:49.860026 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 17 23:24:49.860031 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 17 23:24:49.860036 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 17 23:24:49.860041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 17 23:24:49.860046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:24:49.860051 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:24:49.860056 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 23:24:49.860061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:24:49.860066 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 17 23:24:49.860072 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:24:49.860077 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 17 23:24:49.860082 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:24:49.860088 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:24:49.860093 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:24:49.860098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:24:49.860103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:24:49.860108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:24:49.860113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:24:49.860118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:24:49.860124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:24:49.860129 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:24:49.860134 kernel: TSC deadline timer available
Apr 17 23:24:49.860139 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:24:49.860144 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:24:49.860149 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:24:49.860154 kernel: kvm-guest: setup PV sched yield
Apr 17 23:24:49.860158 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:24:49.860163 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:24:49.860170 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:24:49.860175 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:24:49.860180 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:24:49.860185 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:24:49.860190 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:24:49.860197 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:24:49.860202 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:24:49.860208 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:24:49.860215 kernel: random: crng init done
Apr 17 23:24:49.860220 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:24:49.860225 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:24:49.860230 kernel: Fallback order for Node 0: 0
Apr 17 23:24:49.860235 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 17 23:24:49.860240 kernel: Policy zone: DMA32
Apr 17 23:24:49.860245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:24:49.860250 kernel: Memory: 2399656K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 167140K reserved, 0K cma-reserved)
Apr 17 23:24:49.860255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:24:49.860261 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:24:49.860266 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:24:49.860271 kernel: Dynamic Preempt: voluntary
Apr 17 23:24:49.860276 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:24:49.860286 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:24:49.860293 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:24:49.860298 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:24:49.860304 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:24:49.860309 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:24:49.860315 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:24:49.860320 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:24:49.860326 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:24:49.860333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:24:49.860338 kernel: Console: colour dummy device 80x25
Apr 17 23:24:49.860344 kernel: printk: console [ttyS0] enabled
Apr 17 23:24:49.860349 kernel: ACPI: Core revision 20230628
Apr 17 23:24:49.860355 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:24:49.860362 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:24:49.860367 kernel: x2apic enabled
Apr 17 23:24:49.860373 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:24:49.860378 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:24:49.860384 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:24:49.860390 kernel: kvm-guest: setup PV IPIs
Apr 17 23:24:49.860395 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:24:49.860401 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:24:49.860406 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:24:49.860413 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:24:49.860419 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:24:49.860424 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:24:49.860430 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:24:49.860435 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:24:49.860441 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:24:49.860446 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:24:49.860452 kernel: RETBleed: Vulnerable
Apr 17 23:24:49.860457 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:24:49.860464 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:24:49.860470 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:24:49.860475 kernel: active return thunk: its_return_thunk
Apr 17 23:24:49.860481 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:24:49.860486 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:24:49.860492 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:24:49.860498 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:24:49.860503 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:24:49.860509 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:24:49.860515 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:24:49.860521 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:24:49.860526 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:24:49.860532 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:24:49.860537 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:24:49.860543 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:24:49.860548 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:24:49.860554 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:24:49.860559 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:24:49.860566 kernel: landlock: Up and running.
Apr 17 23:24:49.860572 kernel: SELinux: Initializing.
Apr 17 23:24:49.860577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:24:49.860583 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:24:49.860588 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:24:49.860594 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:24:49.860599 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:24:49.860605 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:24:49.860612 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:24:49.860617 kernel: signal: max sigframe size: 3632
Apr 17 23:24:49.860623 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:24:49.860628 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:24:49.860634 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:24:49.860639 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:24:49.860645 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:24:49.860650 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:24:49.860656 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:24:49.860662 kernel: smpboot: Max logical packages: 1
Apr 17 23:24:49.860668 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:24:49.860673 kernel: devtmpfs: initialized
Apr 17 23:24:49.860679 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:24:49.860684 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 23:24:49.860690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 23:24:49.860695 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 17 23:24:49.860701 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 23:24:49.860706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 23:24:49.860713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:24:49.860719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:24:49.860724 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:24:49.860729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:24:49.860735 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:24:49.860740 kernel: audit: type=2000 audit(1776468289.706:1): state=initialized audit_enabled=0 res=1
Apr 17 23:24:49.860762 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:24:49.860768 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:24:49.860773 kernel: cpuidle: using governor menu
Apr 17 23:24:49.860780 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:24:49.860785 kernel: dca service started, version 1.12.1
Apr 17 23:24:49.860791 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:24:49.860796 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:24:49.860802 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:24:49.860807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:24:49.860813 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:24:49.860819 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:24:49.860824 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:24:49.860831 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:24:49.860836 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:24:49.860842 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:24:49.860847 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:24:49.860853 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:24:49.860858 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:24:49.860864 kernel: ACPI: Interpreter enabled
Apr 17 23:24:49.860869 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:24:49.860875 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:24:49.860881 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:24:49.860887 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:24:49.860892 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:24:49.860897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:24:49.861063 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:24:49.861127 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:24:49.861182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:24:49.861191 kernel: PCI host bridge to bus 0000:00
Apr 17 23:24:49.861250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:24:49.861300 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:24:49.861348 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:24:49.861396 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:24:49.861444 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:24:49.861492 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 17 23:24:49.861542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:24:49.861609 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:24:49.861672 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:24:49.861728 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 17 23:24:49.861810 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 17 23:24:49.861866 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:24:49.861920 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 17 23:24:49.861994 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:24:49.862060 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:24:49.862116 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 17 23:24:49.862171 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 17 23:24:49.862226 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 17 23:24:49.862286 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:24:49.862341 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 17 23:24:49.862398 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 17 23:24:49.862452 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 17 23:24:49.862509 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:24:49.862563 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 17 23:24:49.862618 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 17 23:24:49.862672 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 17 23:24:49.862728 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 17 23:24:49.862815 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:24:49.862871 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:24:49.862929 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:24:49.863004 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 17 23:24:49.863058 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 17 23:24:49.863116 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:24:49.863173 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 17 23:24:49.863181 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:24:49.863186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:24:49.863192 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:24:49.863197 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:24:49.863203 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:24:49.863208 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:24:49.863214 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:24:49.863221 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:24:49.863226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:24:49.863231 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:24:49.863237 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:24:49.863242 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:24:49.863248 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:24:49.863253 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:24:49.863259 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:24:49.863264 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:24:49.863271 kernel: iommu: Default domain type: Translated
Apr 17 23:24:49.863276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:24:49.863282 kernel: efivars: Registered efivars operations
Apr 17 23:24:49.863287 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:24:49.863293 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:24:49.863298 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 23:24:49.863304 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 17 23:24:49.863309 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 17 23:24:49.863314 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 17 23:24:49.863370 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:24:49.863424 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:24:49.863479 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:24:49.863486 kernel: vgaarb: loaded
Apr 17 23:24:49.863492 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:24:49.863498 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:24:49.863503 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:24:49.863509 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:24:49.863514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:24:49.863521 kernel: pnp: PnP ACPI init
Apr 17 23:24:49.863579 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:24:49.863587 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:24:49.863592 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:24:49.863598 kernel: NET: Registered PF_INET protocol family
Apr 17 23:24:49.863603 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:24:49.863609 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:24:49.863615 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:24:49.863622 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:24:49.863628 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:24:49.863633 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:24:49.863639 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:24:49.863644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:24:49.863650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:24:49.863655 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:24:49.863711 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 17 23:24:49.863789 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 17 23:24:49.863847 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:24:49.863897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:24:49.863961 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:24:49.864013 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:24:49.864062 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:24:49.864110 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 17 23:24:49.864117 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:24:49.864123 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:24:49.864130 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:24:49.864136 kernel: Initialise system trusted keyrings
Apr 17 23:24:49.864141 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:24:49.864147 kernel: Key type asymmetric registered
Apr 17 23:24:49.864152 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:24:49.864158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:24:49.864163 kernel: io scheduler mq-deadline registered
Apr 17 23:24:49.864168 kernel: io scheduler kyber registered
Apr 17 23:24:49.864174 kernel: io scheduler bfq registered
Apr 17 23:24:49.864181 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:24:49.864187 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:24:49.864192 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:24:49.864198 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:24:49.864203 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:24:49.864209 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:24:49.864214 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:24:49.864220 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:24:49.864225 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:24:49.864285 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:24:49.864293 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:24:49.864343 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:24:49.864393 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:24:49 UTC (1776468289)
Apr 17 23:24:49.864444 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 23:24:49.864450 kernel: intel_pstate: CPU model not supported
Apr 17 23:24:49.864456 kernel: efifb: probing for efifb
Apr 17 23:24:49.864461 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 17 23:24:49.864468 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 17 23:24:49.864474 kernel: efifb: scrolling: redraw
Apr 17 23:24:49.864479 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 17 23:24:49.864485 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:24:49.864490 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:24:49.864505 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:24:49.864512 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:24:49.864518 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:24:49.864523 kernel: Segment Routing with IPv6
Apr 17 23:24:49.864530 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:24:49.864535 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:24:49.864541 kernel: Key type dns_resolver registered
Apr 17 23:24:49.864546 kernel: IPI shorthand broadcast: enabled
Apr 17 23:24:49.864551 kernel: sched_clock: Marking stable (700074001, 207719959)->(957775148, -49981188)
Apr 17 23:24:49.864557 kernel: registered taskstats version 1
Apr 17 23:24:49.864562 kernel: Loading compiled-in X.509 certificates
Apr 17 23:24:49.864568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:24:49.864574 kernel: Key type .fscrypt registered
Apr 17 23:24:49.864580 kernel: Key type fscrypt-provisioning registered
Apr 17 23:24:49.864586 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:24:49.864591 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:24:49.864597 kernel: ima: No architecture policies found
Apr 17 23:24:49.864602 kernel: clk: Disabling unused clocks
Apr 17 23:24:49.864608 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:24:49.864614 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:24:49.864620 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:24:49.864625 kernel: Run /init as init process
Apr 17 23:24:49.864632 kernel: with arguments:
Apr 17 23:24:49.864637 kernel: /init
Apr 17 23:24:49.864643 kernel: with environment:
Apr 17 23:24:49.864648 kernel: HOME=/
Apr 17 23:24:49.864654 kernel: TERM=linux
Apr 17 23:24:49.864661 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:24:49.864669 systemd[1]: Detected virtualization kvm.
Apr 17 23:24:49.864677 systemd[1]: Detected architecture x86-64.
Apr 17 23:24:49.864682 systemd[1]: Running in initrd.
Apr 17 23:24:49.864688 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:24:49.864694 systemd[1]: Hostname set to .
Apr 17 23:24:49.864700 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:24:49.864707 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:24:49.864713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:24:49.864719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:24:49.864726 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:24:49.864731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:24:49.864737 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:24:49.864764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:24:49.864772 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:24:49.864780 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:24:49.864786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:24:49.864792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:24:49.864798 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:24:49.864804 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:24:49.864812 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:24:49.864818 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:24:49.864824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:24:49.864831 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:24:49.864837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:24:49.864843 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:24:49.864849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:24:49.864855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:24:49.864861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:24:49.864867 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:24:49.864873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:24:49.864881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:24:49.864887 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:24:49.864893 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:24:49.864899 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:24:49.864905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:24:49.864911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:24:49.864917 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:24:49.864923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:24:49.864940 systemd-journald[193]: Collecting audit messages is disabled. Apr 17 23:24:49.864970 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:24:49.864978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:24:49.864985 systemd-journald[193]: Journal started Apr 17 23:24:49.865001 systemd-journald[193]: Runtime Journal (/run/log/journal/69a091d479dd4af1a2f3e7adc6f4e2f7) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:24:49.866817 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:24:49.867259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:24:49.868730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:24:49.869383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:24:49.881942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 17 23:24:49.882691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:24:49.890895 systemd-modules-load[194]: Inserted module 'overlay' Apr 17 23:24:49.893667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:24:49.906983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:24:49.915817 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:24:49.918016 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 17 23:24:49.918721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:24:49.919168 kernel: Bridge firewalling registered Apr 17 23:24:49.919713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:24:49.929126 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:24:49.932185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:24:49.932905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:24:49.936397 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:24:49.947162 dracut-cmdline[229]: dracut-dracut-053 Apr 17 23:24:49.949098 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:24:49.957451 systemd-resolved[228]: Positive Trust Anchors: Apr 17 23:24:49.957474 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:24:49.957498 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:24:49.959378 systemd-resolved[228]: Defaulting to hostname 'linux'. Apr 17 23:24:49.960114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:24:49.970790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:24:50.017798 kernel: SCSI subsystem initialized Apr 17 23:24:50.025790 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:24:50.034799 kernel: iscsi: registered transport (tcp) Apr 17 23:24:50.052103 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:24:50.052138 kernel: QLogic iSCSI HBA Driver Apr 17 23:24:50.081592 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:24:50.090981 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:24:50.110792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 17 23:24:50.110830 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:24:50.113037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:24:50.148794 kernel: raid6: avx512x4 gen() 46289 MB/s Apr 17 23:24:50.165788 kernel: raid6: avx512x2 gen() 45602 MB/s Apr 17 23:24:50.182798 kernel: raid6: avx512x1 gen() 45877 MB/s Apr 17 23:24:50.199787 kernel: raid6: avx2x4 gen() 37769 MB/s Apr 17 23:24:50.216788 kernel: raid6: avx2x2 gen() 37341 MB/s Apr 17 23:24:50.234356 kernel: raid6: avx2x1 gen() 29516 MB/s Apr 17 23:24:50.234377 kernel: raid6: using algorithm avx512x4 gen() 46289 MB/s Apr 17 23:24:50.252338 kernel: raid6: .... xor() 10571 MB/s, rmw enabled Apr 17 23:24:50.252369 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:24:50.270792 kernel: xor: automatically using best checksumming function avx Apr 17 23:24:50.419806 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:24:50.429150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:24:50.444988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:24:50.454667 systemd-udevd[413]: Using default interface naming scheme 'v255'. Apr 17 23:24:50.457238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:24:50.461683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:24:50.475383 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Apr 17 23:24:50.498913 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:24:50.512984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:24:50.548899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:24:50.561101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 17 23:24:50.570932 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:24:50.573919 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:24:50.585554 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:24:50.576921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:24:50.593433 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:24:50.593455 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:24:50.577834 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:24:50.586379 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:24:50.602795 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:24:50.608397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:24:50.608467 kernel: GPT:9289727 != 19775487 Apr 17 23:24:50.608496 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:24:50.608511 kernel: GPT:9289727 != 19775487 Apr 17 23:24:50.608524 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:24:50.608538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:24:50.612681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:24:50.613056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:24:50.616763 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:24:50.618980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:24:50.631684 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Apr 17 23:24:50.619145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:24:50.620411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:24:50.637410 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (472) Apr 17 23:24:50.637426 kernel: libata version 3.00 loaded. Apr 17 23:24:50.638788 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:24:50.639089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:24:50.642723 kernel: AES CTR mode by8 optimization enabled Apr 17 23:24:50.649793 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:24:50.649925 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:24:50.653793 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:24:50.653927 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:24:50.653833 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 23:24:50.658205 kernel: scsi host0: ahci Apr 17 23:24:50.658307 kernel: scsi host1: ahci Apr 17 23:24:50.659054 kernel: scsi host2: ahci Apr 17 23:24:50.660065 kernel: scsi host3: ahci Apr 17 23:24:50.660147 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 23:24:50.663383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 17 23:24:50.664783 kernel: scsi host4: ahci Apr 17 23:24:50.667019 kernel: scsi host5: ahci Apr 17 23:24:50.667145 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 17 23:24:50.667154 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 17 23:24:50.669618 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 17 23:24:50.669630 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 17 23:24:50.672313 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 17 23:24:50.672324 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 17 23:24:50.676403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:24:50.679801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:24:50.688890 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:24:50.690675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:24:50.690721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:24:50.693649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:24:50.694851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:24:50.703829 disk-uuid[556]: Primary Header is updated. Apr 17 23:24:50.703829 disk-uuid[556]: Secondary Entries is updated. Apr 17 23:24:50.703829 disk-uuid[556]: Secondary Header is updated. Apr 17 23:24:50.706787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:24:50.710151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:24:50.720923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:24:50.731703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:24:50.980812 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:24:50.989784 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:24:50.989845 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:24:50.989856 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:24:50.992787 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:24:50.992801 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:24:50.993791 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:24:50.995386 kernel: ata3.00: applying bridge limits Apr 17 23:24:50.995398 kernel: ata3.00: configured for UDMA/100 Apr 17 23:24:50.995796 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:24:51.036694 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:24:51.036948 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:24:51.052869 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:24:51.714217 disk-uuid[559]: The operation has completed successfully. Apr 17 23:24:51.715592 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:24:51.734027 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:24:51.734113 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:24:51.751033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:24:51.754526 sh[600]: Success Apr 17 23:24:51.764786 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:24:51.792521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:24:51.807987 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 17 23:24:51.811871 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:24:51.820099 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:24:51.820125 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:24:51.820134 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:24:51.822529 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:24:51.822539 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:24:51.827869 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:24:51.830115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:24:51.841872 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:24:51.843116 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:24:51.856246 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:24:51.856277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:24:51.856285 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:24:51.859783 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:24:51.865532 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:24:51.867921 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:24:51.873711 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:24:51.880089 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:24:51.920145 ignition[704]: Ignition 2.19.0 Apr 17 23:24:51.920158 ignition[704]: Stage: fetch-offline Apr 17 23:24:51.920185 ignition[704]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:24:51.920192 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:24:51.920256 ignition[704]: parsed url from cmdline: "" Apr 17 23:24:51.920258 ignition[704]: no config URL provided Apr 17 23:24:51.920261 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:24:51.920266 ignition[704]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:24:51.920285 ignition[704]: op(1): [started] loading QEMU firmware config module Apr 17 23:24:51.920289 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:24:51.929152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:24:51.932946 ignition[704]: op(1): [finished] loading QEMU firmware config module Apr 17 23:24:51.942928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:24:51.959673 systemd-networkd[790]: lo: Link UP Apr 17 23:24:51.959810 systemd-networkd[790]: lo: Gained carrier Apr 17 23:24:51.962829 systemd-networkd[790]: Enumeration completed Apr 17 23:24:51.963432 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:24:51.967001 systemd[1]: Reached target network.target - Network. Apr 17 23:24:51.969630 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:24:51.969642 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 17 23:24:51.975149 systemd-networkd[790]: eth0: Link UP Apr 17 23:24:51.975153 systemd-networkd[790]: eth0: Gained carrier Apr 17 23:24:51.975159 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:24:51.992829 systemd-networkd[790]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:24:52.041653 ignition[704]: parsing config with SHA512: 05a01a41cdfa8fcf8f2dc8e85c7f379a914f45bdce7ef68c6f9f3cf134edb6fa30078fcc898f9f1afeb695a7ef75a5e4b7dc81b891e582d8e9971eb273e5be32 Apr 17 23:24:52.046240 unknown[704]: fetched base config from "system" Apr 17 23:24:52.046254 unknown[704]: fetched user config from "qemu" Apr 17 23:24:52.046843 ignition[704]: fetch-offline: fetch-offline passed Apr 17 23:24:52.046905 ignition[704]: Ignition finished successfully Apr 17 23:24:52.050808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:24:52.054205 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:24:52.064921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:24:52.078590 ignition[794]: Ignition 2.19.0 Apr 17 23:24:52.078616 ignition[794]: Stage: kargs Apr 17 23:24:52.078817 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:24:52.078827 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:24:52.079470 ignition[794]: kargs: kargs passed Apr 17 23:24:52.079500 ignition[794]: Ignition finished successfully Apr 17 23:24:52.084836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:24:52.094959 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 17 23:24:52.104408 ignition[802]: Ignition 2.19.0 Apr 17 23:24:52.104423 ignition[802]: Stage: disks Apr 17 23:24:52.104546 ignition[802]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:24:52.104553 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:24:52.105200 ignition[802]: disks: disks passed Apr 17 23:24:52.105238 ignition[802]: Ignition finished successfully Apr 17 23:24:52.110551 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:24:52.113088 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:24:52.113624 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:24:52.116356 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:24:52.119340 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:24:52.121771 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:24:52.134983 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:24:52.144645 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:24:52.149056 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:24:52.155390 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:24:52.232615 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:24:52.234702 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:24:52.233594 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:24:52.244877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:24:52.246381 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:24:52.248261 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 23:24:52.248292 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:24:52.259974 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Apr 17 23:24:52.259993 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:24:52.260003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:24:52.260011 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:24:52.248309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:24:52.258574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:24:52.260779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:24:52.269287 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:24:52.268982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:24:52.295920 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:24:52.300233 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:24:52.303301 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:24:52.306950 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:24:52.366904 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:24:52.377848 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:24:52.380888 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:24:52.385320 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:24:52.399631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:24:52.402101 ignition[933]: INFO : Ignition 2.19.0 Apr 17 23:24:52.402101 ignition[933]: INFO : Stage: mount Apr 17 23:24:52.402101 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:24:52.402101 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:24:52.402101 ignition[933]: INFO : mount: mount passed Apr 17 23:24:52.402101 ignition[933]: INFO : Ignition finished successfully Apr 17 23:24:52.407109 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:24:52.417845 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:24:52.818782 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:24:52.830045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:24:52.835783 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946) Apr 17 23:24:52.838441 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:24:52.838457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:24:52.838470 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:24:52.841774 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:24:52.843057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:24:52.858885 ignition[963]: INFO : Ignition 2.19.0 Apr 17 23:24:52.858885 ignition[963]: INFO : Stage: files Apr 17 23:24:52.862096 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:24:52.862096 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:24:52.862096 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:24:52.862096 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:24:52.862096 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:24:52.870625 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:24:52.870625 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:24:52.870625 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:24:52.870625 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:24:52.870625 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:24:52.862833 unknown[963]: wrote ssh authorized keys file for user: core Apr 17 23:24:52.918847 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:24:53.028446 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:24:53.028446 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:24:53.028446 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 17 23:24:53.384380 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:24:53.629067 systemd-networkd[790]: eth0: Gained IPv6LL
Apr 17 23:24:54.061564 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:24:54.061564 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:24:54.066989 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:24:54.449258 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:24:59.570553 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:24:59.570553 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 17 23:24:59.575819 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:24:59.601726 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:24:59.607218 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:24:59.609300 ignition[963]: INFO : files: files passed
Apr 17 23:24:59.609300 ignition[963]: INFO : Ignition finished successfully
Apr 17 23:24:59.613981 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:24:59.624943 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:24:59.628066 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:24:59.634460 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:24:59.634561 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:24:59.639267 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:24:59.641206 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:24:59.641206 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:24:59.645537 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:24:59.648262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:24:59.649412 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:24:59.663963 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:24:59.684988 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:24:59.685123 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:24:59.688045 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:24:59.690707 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:24:59.693218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:24:59.696492 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:24:59.709173 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:24:59.712422 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:24:59.723273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:24:59.724229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:24:59.727148 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:24:59.731794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:24:59.732165 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:24:59.736240 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:24:59.736830 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:24:59.741086 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:24:59.741695 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:24:59.744372 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:24:59.747243 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:24:59.750280 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:24:59.753511 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:24:59.756349 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:24:59.758796 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:24:59.761082 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:24:59.761186 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:24:59.764530 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:24:59.767174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:24:59.769888 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:24:59.770388 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:24:59.772072 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:24:59.772158 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:24:59.777173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:24:59.777276 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:24:59.779851 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:24:59.782149 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:24:59.784197 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:24:59.787534 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:24:59.789975 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:24:59.793059 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:24:59.793132 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:24:59.795426 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:24:59.795488 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:24:59.797740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:24:59.797843 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:24:59.800445 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:24:59.800538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:24:59.820996 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:24:59.823956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:24:59.825182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:24:59.825275 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:24:59.828029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:24:59.828160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:24:59.833101 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:24:59.833178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:24:59.840152 ignition[1018]: INFO : Ignition 2.19.0
Apr 17 23:24:59.840152 ignition[1018]: INFO : Stage: umount
Apr 17 23:24:59.840152 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:24:59.840152 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:24:59.840152 ignition[1018]: INFO : umount: umount passed
Apr 17 23:24:59.840152 ignition[1018]: INFO : Ignition finished successfully
Apr 17 23:24:59.839372 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:24:59.839479 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:24:59.840922 systemd[1]: Stopped target network.target - Network.
Apr 17 23:24:59.843641 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:24:59.843691 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:24:59.846713 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:24:59.846805 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:24:59.849444 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:24:59.849479 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:24:59.851741 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:24:59.851806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:24:59.854256 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:24:59.856666 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:24:59.859779 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:24:59.860286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:24:59.860359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:24:59.863575 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:24:59.863642 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:24:59.863868 systemd-networkd[790]: eth0: DHCPv6 lease lost
Apr 17 23:24:59.866073 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:24:59.866164 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:24:59.872522 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:24:59.872603 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:24:59.877362 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:24:59.877392 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:24:59.888260 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:24:59.890482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:24:59.890520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:24:59.894844 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:24:59.894873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:24:59.907127 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:24:59.907185 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:24:59.909840 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:24:59.909870 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:24:59.912774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:24:59.929513 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:24:59.929609 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:24:59.931543 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:24:59.931639 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:24:59.934128 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:24:59.934179 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:24:59.936437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:24:59.936459 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:24:59.939327 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:24:59.939355 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:24:59.945091 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:24:59.945129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:24:59.948666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:24:59.948696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:24:59.959881 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:24:59.960504 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:24:59.960543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:24:59.963334 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:24:59.963361 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:24:59.966234 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:24:59.966267 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:24:59.969317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:24:59.969343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:24:59.972497 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:24:59.972577 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:24:59.975679 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:24:59.992936 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:24:59.998450 systemd[1]: Switching root.
Apr 17 23:25:00.026569 systemd-journald[193]: Journal stopped
Apr 17 23:25:00.670881 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:25:00.670930 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:25:00.670949 kernel: SELinux: policy capability open_perms=1
Apr 17 23:25:00.670960 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:25:00.670968 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:25:00.670976 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:25:00.670984 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:25:00.670991 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:25:00.670998 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:25:00.671009 kernel: audit: type=1403 audit(1776468300.141:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:25:00.671036 systemd[1]: Successfully loaded SELinux policy in 34.448ms.
Apr 17 23:25:00.671052 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.096ms.
Apr 17 23:25:00.671061 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:25:00.671069 systemd[1]: Detected virtualization kvm.
Apr 17 23:25:00.671077 systemd[1]: Detected architecture x86-64.
Apr 17 23:25:00.671085 systemd[1]: Detected first boot.
Apr 17 23:25:00.671093 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:25:00.671101 zram_generator::config[1062]: No configuration found.
Apr 17 23:25:00.671110 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:25:00.671121 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:25:00.671129 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:25:00.671137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:25:00.671145 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:25:00.671153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:25:00.671161 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:25:00.671169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:25:00.671177 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:25:00.671186 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:25:00.671194 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:25:00.671202 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:25:00.671210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:25:00.671218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:25:00.671226 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:25:00.671234 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:25:00.671242 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:25:00.671252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:25:00.671261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:25:00.671269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:25:00.671277 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:25:00.671284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:25:00.671293 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:25:00.671301 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:25:00.671308 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:25:00.671316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:25:00.671325 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:25:00.671333 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:25:00.671340 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:25:00.671348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:25:00.671356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:25:00.671363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:25:00.671371 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:25:00.671378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:25:00.671385 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:25:00.671394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:25:00.671402 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:25:00.671410 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:00.671418 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:25:00.671426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:25:00.671433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:25:00.671441 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:25:00.671449 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:25:00.671459 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:25:00.671468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:25:00.671475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:25:00.671484 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:25:00.671491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:25:00.671499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:25:00.671507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:25:00.671514 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:25:00.671522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:25:00.671531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:25:00.671539 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:25:00.671547 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:25:00.671554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:25:00.671562 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:25:00.671569 kernel: fuse: init (API version 7.39)
Apr 17 23:25:00.671576 kernel: loop: module loaded
Apr 17 23:25:00.671584 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:25:00.671591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:25:00.671600 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:25:00.671619 systemd-journald[1139]: Collecting audit messages is disabled.
Apr 17 23:25:00.671634 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:25:00.671644 systemd-journald[1139]: Journal started
Apr 17 23:25:00.671660 systemd-journald[1139]: Runtime Journal (/run/log/journal/69a091d479dd4af1a2f3e7adc6f4e2f7) is 6.0M, max 48.3M, 42.2M free.
Apr 17 23:25:00.440327 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:25:00.457560 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:25:00.457927 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:25:00.674929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:25:00.679175 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:25:00.679204 systemd[1]: Stopped verity-setup.service.
Apr 17 23:25:00.682844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:00.686460 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:25:00.687391 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:25:00.689816 kernel: ACPI: bus type drm_connector registered
Apr 17 23:25:00.689833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:25:00.691413 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:25:00.692855 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:25:00.694346 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:25:00.695867 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:25:00.697432 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:25:00.699174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:25:00.700989 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:25:00.701219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:25:00.702977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:25:00.703121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:25:00.704969 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:25:00.705105 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:25:00.706652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:25:00.706790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:25:00.708651 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:25:00.708858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:25:00.710482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:25:00.710591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:25:00.712311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:25:00.713981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:25:00.715979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:25:00.718505 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:25:00.727061 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:25:00.736918 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:25:00.740162 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:25:00.741683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:25:00.741709 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:25:00.743739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:25:00.746977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:25:00.749313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:25:00.751456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:25:00.752573 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:25:00.755271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:25:00.757420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:25:00.759394 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:25:00.761479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:25:00.763530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:25:00.763722 systemd-journald[1139]: Time spent on flushing to /var/log/journal/69a091d479dd4af1a2f3e7adc6f4e2f7 is 9.837ms for 999 entries.
Apr 17 23:25:00.763722 systemd-journald[1139]: System Journal (/var/log/journal/69a091d479dd4af1a2f3e7adc6f4e2f7) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:25:00.780011 systemd-journald[1139]: Received client request to flush runtime journal.
Apr 17 23:25:00.771200 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:25:00.775777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:25:00.780666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:25:00.784297 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:25:00.786692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:25:00.788769 kernel: loop0: detected capacity change from 0 to 140768
Apr 17 23:25:00.790490 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:25:00.793257 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:25:00.796580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:25:00.805992 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:25:00.809973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:25:00.815269 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:25:00.818016 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:25:00.821738 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:25:00.824617 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Apr 17 23:25:00.824629 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Apr 17 23:25:00.829370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:25:00.841910 kernel: loop1: detected capacity change from 0 to 228704
Apr 17 23:25:00.843971 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:25:00.847138 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:25:00.848009 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:25:00.871818 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:25:00.879405 kernel: loop2: detected capacity change from 0 to 142488
Apr 17 23:25:00.880983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:25:00.894681 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 17 23:25:00.894705 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 17 23:25:00.898485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:25:00.911791 kernel: loop3: detected capacity change from 0 to 140768
Apr 17 23:25:00.922880 kernel: loop4: detected capacity change from 0 to 228704
Apr 17 23:25:00.931797 kernel: loop5: detected capacity change from 0 to 142488
Apr 17 23:25:00.941013 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 23:25:00.941582 (sd-merge)[1203]: Merged extensions into '/usr'.
Apr 17 23:25:00.945501 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:25:00.945520 systemd[1]: Reloading...
Apr 17 23:25:00.991932 zram_generator::config[1228]: No configuration found.
Apr 17 23:25:01.039786 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:25:01.062853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:25:01.091566 systemd[1]: Reloading finished in 145 ms.
Apr 17 23:25:01.123586 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:25:01.125467 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:25:01.150197 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:25:01.152456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:25:01.164477 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:25:01.164492 systemd[1]: Reloading...
Apr 17 23:25:01.167083 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:25:01.167294 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:25:01.167807 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:25:01.167976 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 17 23:25:01.168023 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Apr 17 23:25:01.169690 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:25:01.169707 systemd-tmpfiles[1267]: Skipping /boot
Apr 17 23:25:01.174730 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:25:01.174881 systemd-tmpfiles[1267]: Skipping /boot
Apr 17 23:25:01.211787 zram_generator::config[1295]: No configuration found.
Apr 17 23:25:01.286048 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:25:01.318278 systemd[1]: Reloading finished in 153 ms.
Apr 17 23:25:01.333822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:25:01.346483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:25:01.354077 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:25:01.356779 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:25:01.359360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:25:01.364930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:25:01.368576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:25:01.371672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:25:01.375859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.375981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:25:01.376859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:25:01.382590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:25:01.386722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:25:01.388607 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:25:01.393003 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:25:01.394974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.395786 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:25:01.396152 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Apr 17 23:25:01.397516 augenrules[1358]: No rules
Apr 17 23:25:01.398398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:25:01.398584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:25:01.401366 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:25:01.404100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:25:01.404217 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:25:01.406833 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:25:01.407062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:25:01.414431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:25:01.416895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.417111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:25:01.423016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:25:01.428216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:25:01.431987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:25:01.434103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:25:01.441088 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:25:01.444977 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:25:01.446684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.447558 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:25:01.451632 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:25:01.464896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1383)
Apr 17 23:25:01.456649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:25:01.458674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:25:01.458896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:25:01.460907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:25:01.461000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:25:01.462965 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:25:01.463068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:25:01.486124 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:25:01.495491 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:25:01.496907 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:25:01.499268 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:25:01.501590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.501727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:25:01.506332 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 17 23:25:01.510929 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:25:01.511028 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:25:01.512209 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:25:01.507151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:25:01.511829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:25:01.531296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:25:01.538261 systemd-networkd[1394]: lo: Link UP
Apr 17 23:25:01.538267 systemd-networkd[1394]: lo: Gained carrier
Apr 17 23:25:01.540145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:25:01.541687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:25:01.542865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:25:01.544739 systemd-networkd[1394]: Enumeration completed
Apr 17 23:25:01.545712 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:25:01.545714 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:25:01.546340 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:25:01.546357 systemd-networkd[1394]: eth0: Link UP
Apr 17 23:25:01.546359 systemd-networkd[1394]: eth0: Gained carrier
Apr 17 23:25:01.546365 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:25:01.547674 systemd-resolved[1338]: Positive Trust Anchors:
Apr 17 23:25:01.547682 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:25:01.547724 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:25:01.551799 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 17 23:25:01.548520 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:25:01.552461 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:25:01.552497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:25:01.552731 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:25:01.554254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:25:01.554378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:25:01.555744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:25:01.556448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:25:01.561867 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:25:01.563889 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:25:01.565721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:25:01.566481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:25:01.567612 systemd-resolved[1338]: Defaulting to hostname 'linux'.
Apr 17 23:25:01.568962 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:25:01.569547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:25:01.569730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:25:01.582061 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:25:01.594274 systemd[1]: Reached target network.target - Network.
Apr 17 23:25:01.595144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:25:01.599144 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:25:01.601669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:25:01.602012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:25:01.604969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:25:01.617375 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:25:01.633985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:25:01.634804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:25:01.647852 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:25:01.649987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:25:01.665899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:25:01.667835 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:25:02.684419 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 23:25:02.684483 systemd-timesyncd[1423]: Initial clock synchronization to Fri 2026-04-17 23:25:02.684206 UTC.
Apr 17 23:25:02.684560 systemd-resolved[1338]: Clock change detected. Flushing caches.
Apr 17 23:25:02.722473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:25:02.724578 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:25:02.735353 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:25:02.744396 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:25:02.776015 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:25:02.778152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:25:02.779703 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:25:02.781199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:25:02.782901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:25:02.784729 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:25:02.786408 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:25:02.788103 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:25:02.789786 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:25:02.789818 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:25:02.791038 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:25:02.792738 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:25:02.795272 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:25:02.807989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:25:02.810565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:25:02.812393 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:25:02.813854 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:25:02.815105 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:25:02.816346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:25:02.816377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:25:02.817032 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:25:02.817091 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:25:02.819104 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:25:02.821734 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:25:02.825128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:25:02.825725 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:25:02.827334 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:25:02.831175 jq[1449]: false
Apr 17 23:25:02.832293 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:25:02.835772 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:25:02.840402 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found loop3
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found loop4
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found loop5
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found sr0
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda1
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda2
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda3
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found usr
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda4
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda6
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda7
Apr 17 23:25:02.841569 extend-filesystems[1450]: Found vda9
Apr 17 23:25:02.841569 extend-filesystems[1450]: Checking size of /dev/vda9
Apr 17 23:25:02.882951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1380)
Apr 17 23:25:02.843842 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:25:02.853489 dbus-daemon[1448]: [system] SELinux support is enabled
Apr 17 23:25:02.883295 extend-filesystems[1450]: Resized partition /dev/vda9
Apr 17 23:25:02.845797 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:25:02.893550 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 23:25:02.893610 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:25:02.846784 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:25:02.848995 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:25:02.901986 update_engine[1463]: I20260417 23:25:02.881755 1463 main.cc:92] Flatcar Update Engine starting
Apr 17 23:25:02.901986 update_engine[1463]: I20260417 23:25:02.895937 1463 update_check_scheduler.cc:74] Next update check in 8m41s
Apr 17 23:25:02.853346 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:25:02.902829 jq[1466]: true
Apr 17 23:25:02.855950 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:25:02.860658 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:25:02.903047 tar[1473]: linux-amd64/LICENSE
Apr 17 23:25:02.903047 tar[1473]: linux-amd64/helm
Apr 17 23:25:02.865641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:25:02.865782 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:25:02.865972 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:25:02.866090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:25:02.872646 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:25:02.873064 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:25:02.882324 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:25:02.882346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:25:02.884078 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:25:02.884092 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:25:02.895782 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:25:02.902607 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:25:02.903423 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:25:02.903434 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:25:02.903693 systemd-logind[1462]: New seat seat0.
Apr 17 23:25:02.908122 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:25:02.911316 jq[1474]: true
Apr 17 23:25:02.918733 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 23:25:02.919118 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:25:02.932536 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 23:25:02.932536 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 23:25:02.932536 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 23:25:02.946349 extend-filesystems[1450]: Resized filesystem in /dev/vda9
Apr 17 23:25:02.933592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:25:02.933758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:25:02.950349 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:25:02.962132 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:25:02.962986 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:25:02.966003 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 23:25:03.002883 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:25:03.020238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:25:03.027462 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:25:03.033568 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:25:03.033796 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:25:03.036971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:25:03.048543 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:25:03.058573 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:25:03.060597 containerd[1481]: time="2026-04-17T23:25:03.060532549Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:25:03.061038 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:25:03.062953 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:25:03.067700 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:25:03.070277 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:37066.service - OpenSSH per-connection server daemon (10.0.0.1:37066).
Apr 17 23:25:03.080373 containerd[1481]: time="2026-04-17T23:25:03.080321978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081531 containerd[1481]: time="2026-04-17T23:25:03.081466618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081531 containerd[1481]: time="2026-04-17T23:25:03.081498649Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:25:03.081531 containerd[1481]: time="2026-04-17T23:25:03.081527590Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:25:03.081658 containerd[1481]: time="2026-04-17T23:25:03.081637596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:25:03.081685 containerd[1481]: time="2026-04-17T23:25:03.081658548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081724 containerd[1481]: time="2026-04-17T23:25:03.081697091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081724 containerd[1481]: time="2026-04-17T23:25:03.081705613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081859 containerd[1481]: time="2026-04-17T23:25:03.081819750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081859 containerd[1481]: time="2026-04-17T23:25:03.081843089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081859 containerd[1481]: time="2026-04-17T23:25:03.081857665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081917 containerd[1481]: time="2026-04-17T23:25:03.081864444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.081931 containerd[1481]: time="2026-04-17T23:25:03.081915966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.082079 containerd[1481]: time="2026-04-17T23:25:03.082047879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:25:03.082150 containerd[1481]: time="2026-04-17T23:25:03.082133309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:25:03.082166 containerd[1481]: time="2026-04-17T23:25:03.082149622Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:25:03.082214 containerd[1481]: time="2026-04-17T23:25:03.082199119Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:25:03.082278 containerd[1481]: time="2026-04-17T23:25:03.082263529Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:25:03.086570 containerd[1481]: time="2026-04-17T23:25:03.086520928Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:25:03.086570 containerd[1481]: time="2026-04-17T23:25:03.086559810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:25:03.086570 containerd[1481]: time="2026-04-17T23:25:03.086571752Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:25:03.086706 containerd[1481]: time="2026-04-17T23:25:03.086583382Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:25:03.086706 containerd[1481]: time="2026-04-17T23:25:03.086593655Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:25:03.086706 containerd[1481]: time="2026-04-17T23:25:03.086697261Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:25:03.086902 containerd[1481]: time="2026-04-17T23:25:03.086868677Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:25:03.086972 containerd[1481]: time="2026-04-17T23:25:03.086954957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:25:03.086988 containerd[1481]: time="2026-04-17T23:25:03.086975170Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:25:03.086988 containerd[1481]: time="2026-04-17T23:25:03.086984883Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:25:03.087013 containerd[1481]: time="2026-04-17T23:25:03.086994845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087013 containerd[1481]: time="2026-04-17T23:25:03.087006141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087037 containerd[1481]: time="2026-04-17T23:25:03.087017483Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087037 containerd[1481]: time="2026-04-17T23:25:03.087028110Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087065 containerd[1481]: time="2026-04-17T23:25:03.087038718Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087065 containerd[1481]: time="2026-04-17T23:25:03.087048424Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087065 containerd[1481]: time="2026-04-17T23:25:03.087057070Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087103 containerd[1481]: time="2026-04-17T23:25:03.087065231Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:25:03.087103 containerd[1481]: time="2026-04-17T23:25:03.087079456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087103 containerd[1481]: time="2026-04-17T23:25:03.087089031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087103 containerd[1481]: time="2026-04-17T23:25:03.087098929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087155 containerd[1481]: time="2026-04-17T23:25:03.087108023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087155 containerd[1481]: time="2026-04-17T23:25:03.087117921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087155 containerd[1481]: time="2026-04-17T23:25:03.087132269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087155 containerd[1481]: time="2026-04-17T23:25:03.087140929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087155 containerd[1481]: time="2026-04-17T23:25:03.087149722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087158453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087169352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087181831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087190017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087199230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087209126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:25:03.087271 containerd[1481]: time="2026-04-17T23:25:03.087271639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087408 containerd[1481]: time="2026-04-17T23:25:03.087282052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087408 containerd[1481]: time="2026-04-17T23:25:03.087289662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:25:03.087408 containerd[1481]: time="2026-04-17T23:25:03.087336572Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:25:03.087408 containerd[1481]: time="2026-04-17T23:25:03.087349221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:25:03.087408 containerd[1481]: time="2026-04-17T23:25:03.087357261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:25:03.087622 containerd[1481]: time="2026-04-17T23:25:03.087570046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:25:03.087622 containerd[1481]: time="2026-04-17T23:25:03.087620694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:25:03.087660 containerd[1481]: time="2026-04-17T23:25:03.087635431Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1 Apr 17 23:25:03.087660 containerd[1481]: time="2026-04-17T23:25:03.087644056Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:25:03.087687 containerd[1481]: time="2026-04-17T23:25:03.087663529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:25:03.087940 containerd[1481]: time="2026-04-17T23:25:03.087885811Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:25:03.088048 containerd[1481]: time="2026-04-17T23:25:03.087940711Z" level=info msg="Connect containerd service" Apr 17 23:25:03.089192 containerd[1481]: time="2026-04-17T23:25:03.089157471Z" level=info msg="using legacy CRI server" Apr 17 23:25:03.089192 containerd[1481]: time="2026-04-17T23:25:03.089183413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:25:03.089336 containerd[1481]: time="2026-04-17T23:25:03.089281195Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:25:03.089887 containerd[1481]: time="2026-04-17T23:25:03.089830756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:25:03.090144 containerd[1481]: time="2026-04-17T23:25:03.090072854Z" level=info msg="Start subscribing containerd event" Apr 17 
23:25:03.090208 containerd[1481]: time="2026-04-17T23:25:03.090174355Z" level=info msg="Start recovering state" Apr 17 23:25:03.090332 containerd[1481]: time="2026-04-17T23:25:03.090187153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090358977Z" level=info msg="Start event monitor" Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090440233Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090443635Z" level=info msg="Start snapshots syncer" Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090470948Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090482261Z" level=info msg="Start streaming server" Apr 17 23:25:03.092273 containerd[1481]: time="2026-04-17T23:25:03.090564349Z" level=info msg="containerd successfully booted in 0.030670s" Apr 17 23:25:03.090615 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:25:03.113162 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 37066 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:03.114862 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:03.121826 systemd-logind[1462]: New session 1 of user core. Apr 17 23:25:03.122640 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:25:03.133446 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:25:03.145133 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:25:03.154429 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 17 23:25:03.157914 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:25:03.239443 systemd[1541]: Queued start job for default target default.target. Apr 17 23:25:03.260283 systemd[1541]: Created slice app.slice - User Application Slice. Apr 17 23:25:03.260315 systemd[1541]: Reached target paths.target - Paths. Apr 17 23:25:03.260326 systemd[1541]: Reached target timers.target - Timers. Apr 17 23:25:03.261407 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:25:03.273031 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:25:03.273132 systemd[1541]: Reached target sockets.target - Sockets. Apr 17 23:25:03.273142 systemd[1541]: Reached target basic.target - Basic System. Apr 17 23:25:03.273171 systemd[1541]: Reached target default.target - Main User Target. Apr 17 23:25:03.273191 systemd[1541]: Startup finished in 110ms. Apr 17 23:25:03.273619 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:25:03.277557 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:25:03.322199 tar[1473]: linux-amd64/README.md Apr 17 23:25:03.337187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:25:03.343470 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:37076.service - OpenSSH per-connection server daemon (10.0.0.1:37076). Apr 17 23:25:03.385001 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 37076 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:03.386395 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:03.392317 systemd-logind[1462]: New session 2 of user core. Apr 17 23:25:03.402533 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 17 23:25:03.458017 sshd[1555]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:03.472684 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:37076.service: Deactivated successfully. Apr 17 23:25:03.474722 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:25:03.476477 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:25:03.484559 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:37078.service - OpenSSH per-connection server daemon (10.0.0.1:37078). Apr 17 23:25:03.487566 systemd-logind[1462]: Removed session 2. Apr 17 23:25:03.518797 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 37078 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:03.520031 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:03.526719 systemd-logind[1462]: New session 3 of user core. Apr 17 23:25:03.536495 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:25:03.594960 sshd[1562]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:03.598199 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:37078.service: Deactivated successfully. Apr 17 23:25:03.600021 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:25:03.601834 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:25:03.602866 systemd-logind[1462]: Removed session 3. Apr 17 23:25:04.499629 systemd-networkd[1394]: eth0: Gained IPv6LL Apr 17 23:25:04.502595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:25:04.504981 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:25:04.514608 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 23:25:04.517579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:04.520340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 17 23:25:04.534738 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 23:25:04.534875 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 23:25:04.536765 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:25:04.539626 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:25:05.194269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:25:05.196307 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:25:05.197978 systemd[1]: Startup finished in 816ms (kernel) + 10.459s (initrd) + 4.073s (userspace) = 15.348s. Apr 17 23:25:05.200860 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:25:05.616015 kubelet[1590]: E0417 23:25:05.615953 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:25:05.618457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:25:05.618618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:25:13.609408 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:43748.service - OpenSSH per-connection server daemon (10.0.0.1:43748). Apr 17 23:25:13.641918 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 43748 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:13.642966 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:13.646733 systemd-logind[1462]: New session 4 of user core. 
Apr 17 23:25:13.660520 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:25:13.713416 sshd[1604]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:13.731402 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:43748.service: Deactivated successfully. Apr 17 23:25:13.732607 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:25:13.733556 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:25:13.734470 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:43754.service - OpenSSH per-connection server daemon (10.0.0.1:43754). Apr 17 23:25:13.735049 systemd-logind[1462]: Removed session 4. Apr 17 23:25:13.766849 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 43754 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:13.767888 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:13.771187 systemd-logind[1462]: New session 5 of user core. Apr 17 23:25:13.784374 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:25:13.834289 sshd[1611]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:13.856167 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:43754.service: Deactivated successfully. Apr 17 23:25:13.857503 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:25:13.858617 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:25:13.859729 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:43778.service - OpenSSH per-connection server daemon (10.0.0.1:43778). Apr 17 23:25:13.860935 systemd-logind[1462]: Removed session 5. Apr 17 23:25:13.941264 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 43778 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:13.942258 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:13.945431 systemd-logind[1462]: New session 6 of user core. 
Apr 17 23:25:13.954400 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:25:14.005267 sshd[1618]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:14.013095 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:43778.service: Deactivated successfully. Apr 17 23:25:14.014345 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:25:14.015255 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:25:14.016145 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:43782.service - OpenSSH per-connection server daemon (10.0.0.1:43782). Apr 17 23:25:14.016692 systemd-logind[1462]: Removed session 6. Apr 17 23:25:14.048255 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 43782 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:14.049351 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:14.052582 systemd-logind[1462]: New session 7 of user core. Apr 17 23:25:14.062536 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:25:14.118537 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:25:14.118761 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:25:14.131306 sudo[1628]: pam_unix(sudo:session): session closed for user root Apr 17 23:25:14.133264 sshd[1625]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:14.141094 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:43782.service: Deactivated successfully. Apr 17 23:25:14.142165 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:25:14.143156 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:25:14.144066 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:43794.service - OpenSSH per-connection server daemon (10.0.0.1:43794). Apr 17 23:25:14.144815 systemd-logind[1462]: Removed session 7. 
Apr 17 23:25:14.178891 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:14.180030 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:14.184821 systemd-logind[1462]: New session 8 of user core. Apr 17 23:25:14.200566 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:25:14.252477 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:25:14.252726 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:25:14.255841 sudo[1637]: pam_unix(sudo:session): session closed for user root Apr 17 23:25:14.259665 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:25:14.259860 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:25:14.275472 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:25:14.276637 auditctl[1640]: No rules Apr 17 23:25:14.276877 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:25:14.277026 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:25:14.278731 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:25:14.300311 augenrules[1658]: No rules Apr 17 23:25:14.300824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:25:14.301545 sudo[1636]: pam_unix(sudo:session): session closed for user root Apr 17 23:25:14.302956 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:14.309101 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:43794.service: Deactivated successfully. Apr 17 23:25:14.310143 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 17 23:25:14.311259 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:25:14.312201 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:43820.service - OpenSSH per-connection server daemon (10.0.0.1:43820). Apr 17 23:25:14.312701 systemd-logind[1462]: Removed session 8. Apr 17 23:25:14.344506 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 43820 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:14.345661 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:14.349112 systemd-logind[1462]: New session 9 of user core. Apr 17 23:25:14.357392 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:25:14.408156 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:25:14.408397 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:25:14.634478 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:25:14.634602 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:25:14.868530 dockerd[1689]: time="2026-04-17T23:25:14.868471633Z" level=info msg="Starting up" Apr 17 23:25:14.952309 dockerd[1689]: time="2026-04-17T23:25:14.952012890Z" level=info msg="Loading containers: start." Apr 17 23:25:15.041247 kernel: Initializing XFRM netlink socket Apr 17 23:25:15.097833 systemd-networkd[1394]: docker0: Link UP Apr 17 23:25:15.122493 dockerd[1689]: time="2026-04-17T23:25:15.122358376Z" level=info msg="Loading containers: done." 
Apr 17 23:25:15.135492 dockerd[1689]: time="2026-04-17T23:25:15.135428438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:25:15.135653 dockerd[1689]: time="2026-04-17T23:25:15.135542930Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:25:15.135653 dockerd[1689]: time="2026-04-17T23:25:15.135635000Z" level=info msg="Daemon has completed initialization" Apr 17 23:25:15.166901 dockerd[1689]: time="2026-04-17T23:25:15.166805383Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:25:15.167005 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:25:15.733140 containerd[1481]: time="2026-04-17T23:25:15.733099368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:25:15.815908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:25:15.824418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:15.929960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:25:15.934694 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:25:15.975479 kubelet[1845]: E0417 23:25:15.975393 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:25:15.979168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:25:15.979317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:25:16.608952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818122295.mount: Deactivated successfully. Apr 17 23:25:17.251919 containerd[1481]: time="2026-04-17T23:25:17.251856385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:17.252624 containerd[1481]: time="2026-04-17T23:25:17.252524835Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 17 23:25:17.253458 containerd[1481]: time="2026-04-17T23:25:17.253417537Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:17.255628 containerd[1481]: time="2026-04-17T23:25:17.255585656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:17.258004 containerd[1481]: time="2026-04-17T23:25:17.256756088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id 
\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.523614593s" Apr 17 23:25:17.258004 containerd[1481]: time="2026-04-17T23:25:17.256786969Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:25:17.258503 containerd[1481]: time="2026-04-17T23:25:17.258482293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:25:18.109862 containerd[1481]: time="2026-04-17T23:25:18.109816596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:18.110876 containerd[1481]: time="2026-04-17T23:25:18.110825068Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 17 23:25:18.111816 containerd[1481]: time="2026-04-17T23:25:18.111766668Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:18.114099 containerd[1481]: time="2026-04-17T23:25:18.114057334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:18.114952 containerd[1481]: time="2026-04-17T23:25:18.114912526Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 856.40159ms" Apr 17 23:25:18.114952 containerd[1481]: time="2026-04-17T23:25:18.114944087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:25:18.115488 containerd[1481]: time="2026-04-17T23:25:18.115460509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:25:19.060675 containerd[1481]: time="2026-04-17T23:25:19.060583948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:19.061280 containerd[1481]: time="2026-04-17T23:25:19.061206007Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 17 23:25:19.062249 containerd[1481]: time="2026-04-17T23:25:19.062193449Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:19.064393 containerd[1481]: time="2026-04-17T23:25:19.064364507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:19.065226 containerd[1481]: time="2026-04-17T23:25:19.065190568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 949.697765ms" Apr 17 23:25:19.065268 
containerd[1481]: time="2026-04-17T23:25:19.065234218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:25:19.065765 containerd[1481]: time="2026-04-17T23:25:19.065736584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:25:19.818412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685037564.mount: Deactivated successfully. Apr 17 23:25:20.375314 containerd[1481]: time="2026-04-17T23:25:20.375207134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:20.375886 containerd[1481]: time="2026-04-17T23:25:20.375816690Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 17 23:25:20.376601 containerd[1481]: time="2026-04-17T23:25:20.376562584Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:20.378896 containerd[1481]: time="2026-04-17T23:25:20.378833824Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.313072352s" Apr 17 23:25:20.378896 containerd[1481]: time="2026-04-17T23:25:20.378865019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:25:20.379002 containerd[1481]: time="2026-04-17T23:25:20.378888821Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:20.381137 containerd[1481]: time="2026-04-17T23:25:20.381114609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:25:20.752845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796399007.mount: Deactivated successfully. Apr 17 23:25:21.706876 containerd[1481]: time="2026-04-17T23:25:21.706811461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:21.707604 containerd[1481]: time="2026-04-17T23:25:21.707562423Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 17 23:25:21.708858 containerd[1481]: time="2026-04-17T23:25:21.708819806Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:21.711204 containerd[1481]: time="2026-04-17T23:25:21.711156448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:21.712009 containerd[1481]: time="2026-04-17T23:25:21.711982775Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.330839665s" Apr 17 23:25:21.712009 containerd[1481]: time="2026-04-17T23:25:21.712010555Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:25:21.712518 containerd[1481]: time="2026-04-17T23:25:21.712493904Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:25:22.073759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306782597.mount: Deactivated successfully. Apr 17 23:25:22.083467 containerd[1481]: time="2026-04-17T23:25:22.083409045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:22.084290 containerd[1481]: time="2026-04-17T23:25:22.084261698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 23:25:22.085297 containerd[1481]: time="2026-04-17T23:25:22.085265416Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:22.087239 containerd[1481]: time="2026-04-17T23:25:22.087182400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:22.087861 containerd[1481]: time="2026-04-17T23:25:22.087827179Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 375.304793ms" Apr 17 23:25:22.087895 containerd[1481]: time="2026-04-17T23:25:22.087855167Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 
23:25:22.088323 containerd[1481]: time="2026-04-17T23:25:22.088307316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:25:22.536551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310403929.mount: Deactivated successfully. Apr 17 23:25:23.162279 containerd[1481]: time="2026-04-17T23:25:23.162154782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:23.163067 containerd[1481]: time="2026-04-17T23:25:23.162987574Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 17 23:25:23.164302 containerd[1481]: time="2026-04-17T23:25:23.164271857Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:23.167856 containerd[1481]: time="2026-04-17T23:25:23.167806377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:23.168678 containerd[1481]: time="2026-04-17T23:25:23.168616630Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.080219165s" Apr 17 23:25:23.168678 containerd[1481]: time="2026-04-17T23:25:23.168674597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:25:25.826593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:25:25.837718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:25.860190 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-9.scope)... Apr 17 23:25:25.860208 systemd[1]: Reloading... Apr 17 23:25:25.924326 zram_generator::config[2112]: No configuration found. Apr 17 23:25:25.998994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:25:26.048792 systemd[1]: Reloading finished in 188 ms. Apr 17 23:25:26.087635 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:26.090635 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:25:26.090837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:25:26.091946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:26.200570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:25:26.205467 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:25:26.237516 kubelet[2166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:25:26.237516 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:25:26.237516 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:25:26.237876 kubelet[2166]: I0417 23:25:26.237534 2166 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:25:26.569784 kubelet[2166]: I0417 23:25:26.569709 2166 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:25:26.569784 kubelet[2166]: I0417 23:25:26.569742 2166 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:25:26.569981 kubelet[2166]: I0417 23:25:26.569921 2166 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:25:26.591026 kubelet[2166]: E0417 23:25:26.590253 2166 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:25:26.592730 kubelet[2166]: I0417 23:25:26.592709 2166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:25:26.599787 kubelet[2166]: E0417 23:25:26.599730 2166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:25:26.599787 kubelet[2166]: I0417 23:25:26.599760 2166 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:25:26.603363 kubelet[2166]: I0417 23:25:26.603319 2166 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:25:26.603544 kubelet[2166]: I0417 23:25:26.603500 2166 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:25:26.603694 kubelet[2166]: I0417 23:25:26.603522 2166 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:25:26.603694 kubelet[2166]: I0417 23:25:26.603681 2166 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:25:26.603694 
kubelet[2166]: I0417 23:25:26.603688 2166 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:25:26.603815 kubelet[2166]: I0417 23:25:26.603782 2166 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:25:26.606660 kubelet[2166]: I0417 23:25:26.606608 2166 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:25:26.606660 kubelet[2166]: I0417 23:25:26.606629 2166 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:25:26.606718 kubelet[2166]: I0417 23:25:26.606668 2166 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:25:26.608320 kubelet[2166]: I0417 23:25:26.608148 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:25:26.611000 kubelet[2166]: I0417 23:25:26.610963 2166 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:25:26.611966 kubelet[2166]: I0417 23:25:26.611386 2166 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:25:26.611966 kubelet[2166]: E0417 23:25:26.611618 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:25:26.611966 kubelet[2166]: E0417 23:25:26.611738 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:25:26.612267 kubelet[2166]: W0417 23:25:26.612247 
2166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:25:26.615484 kubelet[2166]: I0417 23:25:26.615449 2166 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:25:26.615558 kubelet[2166]: I0417 23:25:26.615501 2166 server.go:1289] "Started kubelet" Apr 17 23:25:26.616019 kubelet[2166]: I0417 23:25:26.615955 2166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:25:26.616403 kubelet[2166]: I0417 23:25:26.616277 2166 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:25:26.616403 kubelet[2166]: I0417 23:25:26.616284 2166 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:25:26.620008 kubelet[2166]: I0417 23:25:26.619964 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:25:26.621268 kubelet[2166]: I0417 23:25:26.620760 2166 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:25:26.621268 kubelet[2166]: I0417 23:25:26.621019 2166 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:25:26.621268 kubelet[2166]: E0417 23:25:26.621100 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:26.621268 kubelet[2166]: E0417 23:25:26.620315 2166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a74887eaa5d448 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:25:26.615471176 
+0000 UTC m=+0.405718166,LastTimestamp:2026-04-17 23:25:26.615471176 +0000 UTC m=+0.405718166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:25:26.621412 kubelet[2166]: I0417 23:25:26.621386 2166 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:25:26.621715 kubelet[2166]: I0417 23:25:26.621434 2166 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:25:26.621785 kubelet[2166]: E0417 23:25:26.621750 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:25:26.621856 kubelet[2166]: E0417 23:25:26.621819 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Apr 17 23:25:26.622250 kubelet[2166]: I0417 23:25:26.622201 2166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:25:26.623442 kubelet[2166]: I0417 23:25:26.623414 2166 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:25:26.623544 kubelet[2166]: I0417 23:25:26.623513 2166 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:25:26.624260 kubelet[2166]: E0417 23:25:26.624202 2166 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:25:26.625048 kubelet[2166]: I0417 23:25:26.624997 2166 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:25:26.633193 kubelet[2166]: I0417 23:25:26.633173 2166 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:25:26.633303 kubelet[2166]: I0417 23:25:26.633188 2166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:25:26.633303 kubelet[2166]: I0417 23:25:26.633285 2166 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:25:26.692567 kubelet[2166]: I0417 23:25:26.692527 2166 policy_none.go:49] "None policy: Start" Apr 17 23:25:26.692567 kubelet[2166]: I0417 23:25:26.692566 2166 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:25:26.692567 kubelet[2166]: I0417 23:25:26.692578 2166 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:25:26.695504 kubelet[2166]: I0417 23:25:26.695472 2166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:25:26.696607 kubelet[2166]: I0417 23:25:26.696568 2166 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:25:26.696607 kubelet[2166]: I0417 23:25:26.696594 2166 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:25:26.696607 kubelet[2166]: I0417 23:25:26.696613 2166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:25:26.696607 kubelet[2166]: I0417 23:25:26.696621 2166 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:25:26.696755 kubelet[2166]: E0417 23:25:26.696668 2166 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:25:26.697934 kubelet[2166]: E0417 23:25:26.697855 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:25:26.700856 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:25:26.719397 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:25:26.722198 kubelet[2166]: E0417 23:25:26.722182 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:26.723063 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 23:25:26.738760 kubelet[2166]: E0417 23:25:26.738238 2166 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:25:26.738760 kubelet[2166]: I0417 23:25:26.738412 2166 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:25:26.738760 kubelet[2166]: I0417 23:25:26.738424 2166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:25:26.738760 kubelet[2166]: I0417 23:25:26.738693 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:25:26.739576 kubelet[2166]: E0417 23:25:26.739552 2166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:25:26.739678 kubelet[2166]: E0417 23:25:26.739659 2166 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:25:26.808797 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 17 23:25:26.822735 kubelet[2166]: E0417 23:25:26.822570 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Apr 17 23:25:26.830254 kubelet[2166]: E0417 23:25:26.830176 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:26.832793 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 17 23:25:26.840885 kubelet[2166]: I0417 23:25:26.840854 2166 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:25:26.841347 kubelet[2166]: E0417 23:25:26.841309 2166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 23:25:26.843364 kubelet[2166]: E0417 23:25:26.843334 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:26.846094 systemd[1]: Created slice kubepods-burstable-pod0009b8000b2dadc48aeb996d473ad6a1.slice - libcontainer container kubepods-burstable-pod0009b8000b2dadc48aeb996d473ad6a1.slice. Apr 17 23:25:26.847752 kubelet[2166]: E0417 23:25:26.847733 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:26.922444 kubelet[2166]: I0417 23:25:26.922384 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:26.922444 kubelet[2166]: I0417 23:25:26.922434 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:26.922444 kubelet[2166]: I0417 23:25:26.922453 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:26.922444 kubelet[2166]: I0417 23:25:26.922468 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:26.922719 kubelet[2166]: I0417 23:25:26.922483 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:26.922719 kubelet[2166]: I0417 23:25:26.922497 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:26.922719 kubelet[2166]: I0417 23:25:26.922539 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:26.922719 kubelet[2166]: I0417 23:25:26.922556 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:26.922719 kubelet[2166]: I0417 23:25:26.922594 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:27.043542 kubelet[2166]: I0417 23:25:27.043448 2166 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:25:27.043976 kubelet[2166]: E0417 23:25:27.043915 2166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 23:25:27.131397 kubelet[2166]: E0417 23:25:27.131140 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.132187 containerd[1481]: time="2026-04-17T23:25:27.132031546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:27.144928 kubelet[2166]: E0417 23:25:27.144801 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.145491 containerd[1481]: time="2026-04-17T23:25:27.145433774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 17 
23:25:27.148872 kubelet[2166]: E0417 23:25:27.148831 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.149245 containerd[1481]: time="2026-04-17T23:25:27.149193989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0009b8000b2dadc48aeb996d473ad6a1,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:27.223995 kubelet[2166]: E0417 23:25:27.223921 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Apr 17 23:25:27.446792 kubelet[2166]: I0417 23:25:27.446598 2166 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:25:27.447086 kubelet[2166]: E0417 23:25:27.447018 2166 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Apr 17 23:25:27.554644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597544622.mount: Deactivated successfully. 
Apr 17 23:25:27.561911 containerd[1481]: time="2026-04-17T23:25:27.561853920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:25:27.563978 containerd[1481]: time="2026-04-17T23:25:27.563777131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:25:27.564699 containerd[1481]: time="2026-04-17T23:25:27.564643284Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:25:27.565528 containerd[1481]: time="2026-04-17T23:25:27.565489431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:25:27.566815 containerd[1481]: time="2026-04-17T23:25:27.566774819Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:25:27.567720 containerd[1481]: time="2026-04-17T23:25:27.567608328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:25:27.568555 containerd[1481]: time="2026-04-17T23:25:27.568502117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:25:27.569766 containerd[1481]: time="2026-04-17T23:25:27.569732324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:25:27.571088 
containerd[1481]: time="2026-04-17T23:25:27.571052775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.553931ms" Apr 17 23:25:27.572179 containerd[1481]: time="2026-04-17T23:25:27.572117041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.836601ms" Apr 17 23:25:27.578111 containerd[1481]: time="2026-04-17T23:25:27.578076899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.917469ms" Apr 17 23:25:27.676803 containerd[1481]: time="2026-04-17T23:25:27.676672558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:27.676803 containerd[1481]: time="2026-04-17T23:25:27.676736470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:27.676803 containerd[1481]: time="2026-04-17T23:25:27.676746741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.676803 containerd[1481]: time="2026-04-17T23:25:27.676794638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.678017 containerd[1481]: time="2026-04-17T23:25:27.677903934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:27.678017 containerd[1481]: time="2026-04-17T23:25:27.677940644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:27.678017 containerd[1481]: time="2026-04-17T23:25:27.677948793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.678017 containerd[1481]: time="2026-04-17T23:25:27.677990808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.681442 containerd[1481]: time="2026-04-17T23:25:27.681250411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:27.681442 containerd[1481]: time="2026-04-17T23:25:27.681309391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:27.681442 containerd[1481]: time="2026-04-17T23:25:27.681321696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.681442 containerd[1481]: time="2026-04-17T23:25:27.681390147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:27.706958 kubelet[2166]: E0417 23:25:27.706841 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:25:27.708446 systemd[1]: Started cri-containerd-232c4489328a399128c8be4d3f2f9e027d63e276800f853fac31c77f81a3925f.scope - libcontainer container 232c4489328a399128c8be4d3f2f9e027d63e276800f853fac31c77f81a3925f. Apr 17 23:25:27.709532 systemd[1]: Started cri-containerd-6e8b347460ac2ae32984ebbbbf3a6857ca8303f912857f2489b8dabcc424e1c1.scope - libcontainer container 6e8b347460ac2ae32984ebbbbf3a6857ca8303f912857f2489b8dabcc424e1c1. Apr 17 23:25:27.710630 systemd[1]: Started cri-containerd-a29f1627a24a241c3438172800d6dac90ac0a0e7bb22fc159c5d842f8b2446ae.scope - libcontainer container a29f1627a24a241c3438172800d6dac90ac0a0e7bb22fc159c5d842f8b2446ae. 
Apr 17 23:25:27.712375 kubelet[2166]: E0417 23:25:27.712341 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:25:27.752057 containerd[1481]: time="2026-04-17T23:25:27.752013688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e8b347460ac2ae32984ebbbbf3a6857ca8303f912857f2489b8dabcc424e1c1\"" Apr 17 23:25:27.753179 kubelet[2166]: E0417 23:25:27.753151 2166 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:25:27.753437 containerd[1481]: time="2026-04-17T23:25:27.753388794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29f1627a24a241c3438172800d6dac90ac0a0e7bb22fc159c5d842f8b2446ae\"" Apr 17 23:25:27.753893 kubelet[2166]: E0417 23:25:27.753863 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.754666 containerd[1481]: time="2026-04-17T23:25:27.754596320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0009b8000b2dadc48aeb996d473ad6a1,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"232c4489328a399128c8be4d3f2f9e027d63e276800f853fac31c77f81a3925f\"" Apr 17 23:25:27.755495 kubelet[2166]: E0417 23:25:27.755478 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.756175 kubelet[2166]: E0417 23:25:27.756147 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:27.760909 containerd[1481]: time="2026-04-17T23:25:27.760861564Z" level=info msg="CreateContainer within sandbox \"232c4489328a399128c8be4d3f2f9e027d63e276800f853fac31c77f81a3925f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:25:27.763520 containerd[1481]: time="2026-04-17T23:25:27.763483212Z" level=info msg="CreateContainer within sandbox \"a29f1627a24a241c3438172800d6dac90ac0a0e7bb22fc159c5d842f8b2446ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:25:27.805631 containerd[1481]: time="2026-04-17T23:25:27.805562491Z" level=info msg="CreateContainer within sandbox \"6e8b347460ac2ae32984ebbbbf3a6857ca8303f912857f2489b8dabcc424e1c1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:25:27.908738 containerd[1481]: time="2026-04-17T23:25:27.908625965Z" level=info msg="CreateContainer within sandbox \"232c4489328a399128c8be4d3f2f9e027d63e276800f853fac31c77f81a3925f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cd0a2d46398f4655dd23181b10382054b0c52d59c432dad824689b1d657c29aa\"" Apr 17 23:25:27.909605 containerd[1481]: time="2026-04-17T23:25:27.909557308Z" level=info msg="StartContainer for \"cd0a2d46398f4655dd23181b10382054b0c52d59c432dad824689b1d657c29aa\"" Apr 17 23:25:27.911645 containerd[1481]: time="2026-04-17T23:25:27.911534803Z" level=info msg="CreateContainer within sandbox 
\"a29f1627a24a241c3438172800d6dac90ac0a0e7bb22fc159c5d842f8b2446ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5e843bf8bf27f246a84fd62548b3b337151b8e261ba700ca383c3f1c5b3d5d6\"" Apr 17 23:25:27.912137 containerd[1481]: time="2026-04-17T23:25:27.912071535Z" level=info msg="StartContainer for \"d5e843bf8bf27f246a84fd62548b3b337151b8e261ba700ca383c3f1c5b3d5d6\"" Apr 17 23:25:27.912898 containerd[1481]: time="2026-04-17T23:25:27.912865373Z" level=info msg="CreateContainer within sandbox \"6e8b347460ac2ae32984ebbbbf3a6857ca8303f912857f2489b8dabcc424e1c1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ae6662c3417b64105bd2e2ed01a9bc51d1e2280a2f3046bb537d53c991841537\"" Apr 17 23:25:27.913146 containerd[1481]: time="2026-04-17T23:25:27.913122818Z" level=info msg="StartContainer for \"ae6662c3417b64105bd2e2ed01a9bc51d1e2280a2f3046bb537d53c991841537\"" Apr 17 23:25:27.943563 systemd[1]: Started cri-containerd-cd0a2d46398f4655dd23181b10382054b0c52d59c432dad824689b1d657c29aa.scope - libcontainer container cd0a2d46398f4655dd23181b10382054b0c52d59c432dad824689b1d657c29aa. Apr 17 23:25:27.945082 systemd[1]: Started cri-containerd-d5e843bf8bf27f246a84fd62548b3b337151b8e261ba700ca383c3f1c5b3d5d6.scope - libcontainer container d5e843bf8bf27f246a84fd62548b3b337151b8e261ba700ca383c3f1c5b3d5d6. Apr 17 23:25:27.948980 systemd[1]: Started cri-containerd-ae6662c3417b64105bd2e2ed01a9bc51d1e2280a2f3046bb537d53c991841537.scope - libcontainer container ae6662c3417b64105bd2e2ed01a9bc51d1e2280a2f3046bb537d53c991841537. 
Apr 17 23:25:27.989034 containerd[1481]: time="2026-04-17T23:25:27.988092965Z" level=info msg="StartContainer for \"cd0a2d46398f4655dd23181b10382054b0c52d59c432dad824689b1d657c29aa\" returns successfully" Apr 17 23:25:27.999446 containerd[1481]: time="2026-04-17T23:25:27.999167768Z" level=info msg="StartContainer for \"d5e843bf8bf27f246a84fd62548b3b337151b8e261ba700ca383c3f1c5b3d5d6\" returns successfully" Apr 17 23:25:28.002964 containerd[1481]: time="2026-04-17T23:25:28.002911396Z" level=info msg="StartContainer for \"ae6662c3417b64105bd2e2ed01a9bc51d1e2280a2f3046bb537d53c991841537\" returns successfully" Apr 17 23:25:28.025341 kubelet[2166]: E0417 23:25:28.025235 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Apr 17 23:25:28.250072 kubelet[2166]: I0417 23:25:28.249560 2166 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:25:28.714165 kubelet[2166]: E0417 23:25:28.714099 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:28.714509 kubelet[2166]: E0417 23:25:28.714346 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:28.722369 kubelet[2166]: E0417 23:25:28.722322 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:28.722528 kubelet[2166]: E0417 23:25:28.722463 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:28.725784 
kubelet[2166]: E0417 23:25:28.725735 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:28.725927 kubelet[2166]: E0417 23:25:28.725874 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:28.843434 kubelet[2166]: I0417 23:25:28.843344 2166 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:25:28.843434 kubelet[2166]: E0417 23:25:28.843406 2166 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 23:25:28.860905 kubelet[2166]: E0417 23:25:28.860833 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:28.961566 kubelet[2166]: E0417 23:25:28.961467 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.062849 kubelet[2166]: E0417 23:25:29.062593 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.163629 kubelet[2166]: E0417 23:25:29.163525 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.264371 kubelet[2166]: E0417 23:25:29.264298 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.365426 kubelet[2166]: E0417 23:25:29.365304 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.465786 kubelet[2166]: E0417 23:25:29.465697 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 
23:25:29.566881 kubelet[2166]: E0417 23:25:29.566767 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.668293 kubelet[2166]: E0417 23:25:29.667992 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.727392 kubelet[2166]: E0417 23:25:29.727314 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:29.727900 kubelet[2166]: E0417 23:25:29.727434 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:29.727900 kubelet[2166]: E0417 23:25:29.727625 2166 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:25:29.727900 kubelet[2166]: E0417 23:25:29.727702 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:29.769092 kubelet[2166]: E0417 23:25:29.769001 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.869861 kubelet[2166]: E0417 23:25:29.869800 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:29.970823 kubelet[2166]: E0417 23:25:29.970572 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:30.071404 kubelet[2166]: E0417 23:25:30.071324 2166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:30.171580 kubelet[2166]: E0417 23:25:30.171519 2166 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:30.222452 kubelet[2166]: I0417 23:25:30.222272 2166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:30.232164 kubelet[2166]: I0417 23:25:30.231964 2166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:30.238007 kubelet[2166]: I0417 23:25:30.237961 2166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:30.613870 kubelet[2166]: I0417 23:25:30.613835 2166 apiserver.go:52] "Watching apiserver" Apr 17 23:25:30.616407 kubelet[2166]: E0417 23:25:30.616368 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:30.621792 kubelet[2166]: I0417 23:25:30.621767 2166 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:25:30.728029 kubelet[2166]: I0417 23:25:30.727978 2166 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:30.728380 kubelet[2166]: E0417 23:25:30.728258 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:30.734939 kubelet[2166]: E0417 23:25:30.734902 2166 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:30.735107 kubelet[2166]: E0417 23:25:30.735057 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:31.147955 systemd[1]: 
Reloading requested from client PID 2451 ('systemctl') (unit session-9.scope)... Apr 17 23:25:31.147970 systemd[1]: Reloading... Apr 17 23:25:31.220277 zram_generator::config[2490]: No configuration found. Apr 17 23:25:31.294062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:25:31.347634 systemd[1]: Reloading finished in 199 ms. Apr 17 23:25:31.382517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:31.406143 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:25:31.406378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:25:31.416773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:25:31.513729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:25:31.517241 (kubelet)[2535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:25:31.549190 kubelet[2535]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:25:31.549190 kubelet[2535]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:25:31.549190 kubelet[2535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:25:31.549565 kubelet[2535]: I0417 23:25:31.549211 2535 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:25:31.554007 kubelet[2535]: I0417 23:25:31.553961 2535 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:25:31.554007 kubelet[2535]: I0417 23:25:31.553977 2535 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:25:31.554148 kubelet[2535]: I0417 23:25:31.554141 2535 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:25:31.556023 kubelet[2535]: I0417 23:25:31.555967 2535 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:25:31.558269 kubelet[2535]: I0417 23:25:31.558249 2535 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:25:31.560931 kubelet[2535]: E0417 23:25:31.560901 2535 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:25:31.560931 kubelet[2535]: I0417 23:25:31.560924 2535 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:25:31.564197 kubelet[2535]: I0417 23:25:31.564166 2535 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:25:31.564358 kubelet[2535]: I0417 23:25:31.564334 2535 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:25:31.564521 kubelet[2535]: I0417 23:25:31.564360 2535 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:25:31.564521 kubelet[2535]: I0417 23:25:31.564514 2535 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:25:31.564645 
kubelet[2535]: I0417 23:25:31.564524 2535 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:25:31.564645 kubelet[2535]: I0417 23:25:31.564568 2535 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:25:31.564746 kubelet[2535]: I0417 23:25:31.564727 2535 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:25:31.564746 kubelet[2535]: I0417 23:25:31.564741 2535 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:25:31.564779 kubelet[2535]: I0417 23:25:31.564757 2535 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:25:31.564779 kubelet[2535]: I0417 23:25:31.564767 2535 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:25:31.565450 kubelet[2535]: I0417 23:25:31.565432 2535 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:25:31.568044 kubelet[2535]: I0417 23:25:31.566286 2535 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:25:31.570488 kubelet[2535]: I0417 23:25:31.570094 2535 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:25:31.570488 kubelet[2535]: I0417 23:25:31.570125 2535 server.go:1289] "Started kubelet" Apr 17 23:25:31.570838 kubelet[2535]: I0417 23:25:31.570808 2535 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:25:31.571009 kubelet[2535]: I0417 23:25:31.570975 2535 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:25:31.572145 kubelet[2535]: I0417 23:25:31.571158 2535 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:25:31.572201 kubelet[2535]: I0417 23:25:31.572150 2535 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:25:31.572517 
kubelet[2535]: I0417 23:25:31.572483 2535 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:25:31.573889 kubelet[2535]: I0417 23:25:31.573844 2535 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:25:31.575331 kubelet[2535]: I0417 23:25:31.575310 2535 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:25:31.575490 kubelet[2535]: E0417 23:25:31.575466 2535 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:25:31.580263 kubelet[2535]: I0417 23:25:31.580200 2535 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:25:31.580842 kubelet[2535]: I0417 23:25:31.580734 2535 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:25:31.583021 kubelet[2535]: I0417 23:25:31.583009 2535 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:25:31.583085 kubelet[2535]: I0417 23:25:31.583080 2535 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:25:31.583167 kubelet[2535]: I0417 23:25:31.583156 2535 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:25:31.583425 kubelet[2535]: E0417 23:25:31.583411 2535 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:25:31.589181 kubelet[2535]: I0417 23:25:31.589158 2535 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:25:31.590317 kubelet[2535]: I0417 23:25:31.590200 2535 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:25:31.590317 kubelet[2535]: I0417 23:25:31.590294 2535 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:25:31.590317 kubelet[2535]: I0417 23:25:31.590308 2535 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:25:31.590317 kubelet[2535]: I0417 23:25:31.590318 2535 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:25:31.590405 kubelet[2535]: E0417 23:25:31.590346 2535 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:25:31.609797 kubelet[2535]: I0417 23:25:31.609774 2535 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:25:31.609938 kubelet[2535]: I0417 23:25:31.609931 2535 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:25:31.610033 kubelet[2535]: I0417 23:25:31.610003 2535 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:25:31.610510 kubelet[2535]: I0417 23:25:31.610484 2535 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:25:31.610558 kubelet[2535]: I0417 23:25:31.610506 2535 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:25:31.610558 kubelet[2535]: I0417 23:25:31.610521 2535 policy_none.go:49] "None policy: Start" Apr 17 23:25:31.610558 kubelet[2535]: I0417 23:25:31.610531 2535 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:25:31.610558 kubelet[2535]: I0417 23:25:31.610540 2535 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:25:31.610624 kubelet[2535]: I0417 23:25:31.610611 2535 state_mem.go:75] "Updated machine memory state" Apr 17 23:25:31.613760 kubelet[2535]: E0417 23:25:31.613615 2535 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:25:31.613760 kubelet[2535]: I0417 
23:25:31.613744 2535 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:25:31.613760 kubelet[2535]: I0417 23:25:31.613751 2535 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:25:31.613914 kubelet[2535]: I0417 23:25:31.613887 2535 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:25:31.614704 kubelet[2535]: E0417 23:25:31.614544 2535 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:25:31.691593 kubelet[2535]: I0417 23:25:31.691469 2535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:31.692515 kubelet[2535]: I0417 23:25:31.691524 2535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.692639 kubelet[2535]: I0417 23:25:31.691555 2535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:31.698768 kubelet[2535]: E0417 23:25:31.698721 2535 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:31.700040 kubelet[2535]: E0417 23:25:31.699968 2535 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:31.700040 kubelet[2535]: E0417 23:25:31.700013 2535 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.717840 kubelet[2535]: I0417 23:25:31.717820 2535 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:25:31.724090 kubelet[2535]: I0417 23:25:31.724060 2535 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 17 23:25:31.724202 kubelet[2535]: I0417 23:25:31.724126 2535 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:25:31.782498 kubelet[2535]: I0417 23:25:31.782436 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:31.782498 kubelet[2535]: I0417 23:25:31.782488 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.782498 kubelet[2535]: I0417 23:25:31.782505 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.782498 kubelet[2535]: I0417 23:25:31.782517 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:31.782795 kubelet[2535]: I0417 23:25:31.782541 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/0009b8000b2dadc48aeb996d473ad6a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0009b8000b2dadc48aeb996d473ad6a1\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:31.782795 kubelet[2535]: I0417 23:25:31.782556 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.782795 kubelet[2535]: I0417 23:25:31.782593 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.782795 kubelet[2535]: I0417 23:25:31.782635 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:25:31.782795 kubelet[2535]: I0417 23:25:31.782657 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:25:31.999809 kubelet[2535]: E0417 23:25:31.999550 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.000388 kubelet[2535]: E0417 23:25:32.000340 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.000503 kubelet[2535]: E0417 23:25:32.000480 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.145321 sudo[2579]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 23:25:32.145545 sudo[2579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 17 23:25:32.565565 kubelet[2535]: I0417 23:25:32.565506 2535 apiserver.go:52] "Watching apiserver" Apr 17 23:25:32.580521 kubelet[2535]: I0417 23:25:32.580413 2535 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:25:32.594894 sudo[2579]: pam_unix(sudo:session): session closed for user root Apr 17 23:25:32.598957 kubelet[2535]: E0417 23:25:32.598925 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.599246 kubelet[2535]: E0417 23:25:32.599196 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.599409 kubelet[2535]: I0417 23:25:32.599393 2535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:32.607361 kubelet[2535]: E0417 23:25:32.607305 2535 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:25:32.607793 
kubelet[2535]: E0417 23:25:32.607596 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:32.618189 kubelet[2535]: I0417 23:25:32.618043 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.618022587 podStartE2EDuration="2.618022587s" podCreationTimestamp="2026-04-17 23:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:25:32.617874046 +0000 UTC m=+1.097666819" watchObservedRunningTime="2026-04-17 23:25:32.618022587 +0000 UTC m=+1.097815349" Apr 17 23:25:32.625707 kubelet[2535]: I0417 23:25:32.625394 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.625372561 podStartE2EDuration="2.625372561s" podCreationTimestamp="2026-04-17 23:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:25:32.624493224 +0000 UTC m=+1.104285992" watchObservedRunningTime="2026-04-17 23:25:32.625372561 +0000 UTC m=+1.105165331" Apr 17 23:25:32.642930 kubelet[2535]: I0417 23:25:32.642068 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.642047539 podStartE2EDuration="2.642047539s" podCreationTimestamp="2026-04-17 23:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:25:32.631874404 +0000 UTC m=+1.111667185" watchObservedRunningTime="2026-04-17 23:25:32.642047539 +0000 UTC m=+1.121840322" Apr 17 23:25:33.600902 kubelet[2535]: E0417 23:25:33.600845 2535 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:33.601275 kubelet[2535]: E0417 23:25:33.600968 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:34.107419 sudo[1670]: pam_unix(sudo:session): session closed for user root Apr 17 23:25:34.109363 sshd[1667]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:34.112185 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:43820.service: Deactivated successfully. Apr 17 23:25:34.113400 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:25:34.113529 systemd[1]: session-9.scope: Consumed 4.940s CPU time, 162.0M memory peak, 0B memory swap peak. Apr 17 23:25:34.113961 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:25:34.114767 systemd-logind[1462]: Removed session 9. 
Apr 17 23:25:34.601564 kubelet[2535]: E0417 23:25:34.601532 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:35.257117 kubelet[2535]: E0417 23:25:35.257038 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:35.603604 kubelet[2535]: E0417 23:25:35.603552 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:35.729943 kubelet[2535]: I0417 23:25:35.729899 2535 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:25:35.730280 containerd[1481]: time="2026-04-17T23:25:35.730249663Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:25:35.730537 kubelet[2535]: I0417 23:25:35.730448 2535 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:25:36.838526 systemd[1]: Created slice kubepods-besteffort-pod9268c009_19ac_4ff9_917a_6a0dfc436a2d.slice - libcontainer container kubepods-besteffort-pod9268c009_19ac_4ff9_917a_6a0dfc436a2d.slice. Apr 17 23:25:36.849269 systemd[1]: Created slice kubepods-burstable-poda75f119e_5188_4013_ac5c_55bcd5b130b6.slice - libcontainer container kubepods-burstable-poda75f119e_5188_4013_ac5c_55bcd5b130b6.slice. 
Apr 17 23:25:36.918020 kubelet[2535]: I0417 23:25:36.917985 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-cgroup\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918020 kubelet[2535]: I0417 23:25:36.918013 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-lib-modules\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918020 kubelet[2535]: I0417 23:25:36.918028 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a75f119e-5188-4013-ac5c-55bcd5b130b6-clustermesh-secrets\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918020 kubelet[2535]: I0417 23:25:36.918040 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-config-path\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918066 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9268c009-19ac-4ff9-917a-6a0dfc436a2d-xtables-lock\") pod \"kube-proxy-zfht8\" (UID: \"9268c009-19ac-4ff9-917a-6a0dfc436a2d\") " pod="kube-system/kube-proxy-zfht8" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918077 2535 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9268c009-19ac-4ff9-917a-6a0dfc436a2d-lib-modules\") pod \"kube-proxy-zfht8\" (UID: \"9268c009-19ac-4ff9-917a-6a0dfc436a2d\") " pod="kube-system/kube-proxy-zfht8" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918104 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-bpf-maps\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918117 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-kernel\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918129 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-hubble-tls\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918505 kubelet[2535]: I0417 23:25:36.918144 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-net\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918155 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6twr\" (UniqueName: 
\"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-kube-api-access-l6twr\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918168 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-run\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918181 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-hostproc\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918190 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cni-path\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918202 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-etc-cni-netd\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918614 kubelet[2535]: I0417 23:25:36.918236 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-xtables-lock\") pod \"cilium-ppwrb\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " 
pod="kube-system/cilium-ppwrb" Apr 17 23:25:36.918717 kubelet[2535]: I0417 23:25:36.918249 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9268c009-19ac-4ff9-917a-6a0dfc436a2d-kube-proxy\") pod \"kube-proxy-zfht8\" (UID: \"9268c009-19ac-4ff9-917a-6a0dfc436a2d\") " pod="kube-system/kube-proxy-zfht8" Apr 17 23:25:36.918717 kubelet[2535]: I0417 23:25:36.918263 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bfm\" (UniqueName: \"kubernetes.io/projected/9268c009-19ac-4ff9-917a-6a0dfc436a2d-kube-api-access-56bfm\") pod \"kube-proxy-zfht8\" (UID: \"9268c009-19ac-4ff9-917a-6a0dfc436a2d\") " pod="kube-system/kube-proxy-zfht8" Apr 17 23:25:36.921516 systemd[1]: Created slice kubepods-besteffort-pod175ed8f4_6d81_4c39_b9f1_2ff3b73ffea7.slice - libcontainer container kubepods-besteffort-pod175ed8f4_6d81_4c39_b9f1_2ff3b73ffea7.slice. 
Apr 17 23:25:37.019511 kubelet[2535]: I0417 23:25:37.019405 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28zg\" (UniqueName: \"kubernetes.io/projected/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-kube-api-access-q28zg\") pod \"cilium-operator-6c4d7847fc-ptn2w\" (UID: \"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\") " pod="kube-system/cilium-operator-6c4d7847fc-ptn2w" Apr 17 23:25:37.019511 kubelet[2535]: I0417 23:25:37.019460 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ptn2w\" (UID: \"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\") " pod="kube-system/cilium-operator-6c4d7847fc-ptn2w" Apr 17 23:25:37.147933 kubelet[2535]: E0417 23:25:37.147758 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.148437 containerd[1481]: time="2026-04-17T23:25:37.148377546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfht8,Uid:9268c009-19ac-4ff9-917a-6a0dfc436a2d,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:37.151307 kubelet[2535]: E0417 23:25:37.151247 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.151945 containerd[1481]: time="2026-04-17T23:25:37.151852376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ppwrb,Uid:a75f119e-5188-4013-ac5c-55bcd5b130b6,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:37.178420 containerd[1481]: time="2026-04-17T23:25:37.178329681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:37.178420 containerd[1481]: time="2026-04-17T23:25:37.178375272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:37.178420 containerd[1481]: time="2026-04-17T23:25:37.178387290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.179448 containerd[1481]: time="2026-04-17T23:25:37.178997879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:37.179448 containerd[1481]: time="2026-04-17T23:25:37.179035260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:37.179448 containerd[1481]: time="2026-04-17T23:25:37.179043730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.179648 containerd[1481]: time="2026-04-17T23:25:37.179449030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.179648 containerd[1481]: time="2026-04-17T23:25:37.179170901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.204558 systemd[1]: Started cri-containerd-4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43.scope - libcontainer container 4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43. Apr 17 23:25:37.205850 systemd[1]: Started cri-containerd-c0d79b54610cff92b4afcd99313dba12fde6b0f23c0322b995bcb5bde6fbc921.scope - libcontainer container c0d79b54610cff92b4afcd99313dba12fde6b0f23c0322b995bcb5bde6fbc921. 
Apr 17 23:25:37.223116 containerd[1481]: time="2026-04-17T23:25:37.223088780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ppwrb,Uid:a75f119e-5188-4013-ac5c-55bcd5b130b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\"" Apr 17 23:25:37.223619 containerd[1481]: time="2026-04-17T23:25:37.223594929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfht8,Uid:9268c009-19ac-4ff9-917a-6a0dfc436a2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0d79b54610cff92b4afcd99313dba12fde6b0f23c0322b995bcb5bde6fbc921\"" Apr 17 23:25:37.224253 kubelet[2535]: E0417 23:25:37.224176 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.224722 kubelet[2535]: E0417 23:25:37.224606 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.225486 containerd[1481]: time="2026-04-17T23:25:37.224942760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ptn2w,Uid:175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:37.225540 kubelet[2535]: E0417 23:25:37.225080 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.226426 containerd[1481]: time="2026-04-17T23:25:37.226385539Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:25:37.230784 containerd[1481]: time="2026-04-17T23:25:37.230391079Z" level=info msg="CreateContainer within sandbox \"c0d79b54610cff92b4afcd99313dba12fde6b0f23c0322b995bcb5bde6fbc921\" 
for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:25:37.250405 containerd[1481]: time="2026-04-17T23:25:37.250364288Z" level=info msg="CreateContainer within sandbox \"c0d79b54610cff92b4afcd99313dba12fde6b0f23c0322b995bcb5bde6fbc921\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2018f8fca523a0333c643c605c6ecc7794f5646dc5ae02c051e44891c1823256\"" Apr 17 23:25:37.251853 containerd[1481]: time="2026-04-17T23:25:37.251747157Z" level=info msg="StartContainer for \"2018f8fca523a0333c643c605c6ecc7794f5646dc5ae02c051e44891c1823256\"" Apr 17 23:25:37.264735 containerd[1481]: time="2026-04-17T23:25:37.264334918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:37.264735 containerd[1481]: time="2026-04-17T23:25:37.264577596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:37.264735 containerd[1481]: time="2026-04-17T23:25:37.264593947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.264989 containerd[1481]: time="2026-04-17T23:25:37.264813569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:37.291785 systemd[1]: Started cri-containerd-b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b.scope - libcontainer container b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b. Apr 17 23:25:37.294741 systemd[1]: Started cri-containerd-2018f8fca523a0333c643c605c6ecc7794f5646dc5ae02c051e44891c1823256.scope - libcontainer container 2018f8fca523a0333c643c605c6ecc7794f5646dc5ae02c051e44891c1823256. 
Apr 17 23:25:37.319482 containerd[1481]: time="2026-04-17T23:25:37.319393783Z" level=info msg="StartContainer for \"2018f8fca523a0333c643c605c6ecc7794f5646dc5ae02c051e44891c1823256\" returns successfully" Apr 17 23:25:37.333148 containerd[1481]: time="2026-04-17T23:25:37.333086261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ptn2w,Uid:175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\"" Apr 17 23:25:37.335995 kubelet[2535]: E0417 23:25:37.335760 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:37.611269 kubelet[2535]: E0417 23:25:37.609808 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:39.739466 kubelet[2535]: E0417 23:25:39.739382 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:39.754452 kubelet[2535]: I0417 23:25:39.754103 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfht8" podStartSLOduration=3.754077201 podStartE2EDuration="3.754077201s" podCreationTimestamp="2026-04-17 23:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:25:37.623690614 +0000 UTC m=+6.103483378" watchObservedRunningTime="2026-04-17 23:25:39.754077201 +0000 UTC m=+8.233869975" Apr 17 23:25:40.615689 kubelet[2535]: E0417 23:25:40.615339 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:45.261343 kubelet[2535]: E0417 23:25:45.261279 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:45.509668 kubelet[2535]: E0417 23:25:45.509553 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:45.625882 kubelet[2535]: E0417 23:25:45.625744 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:45.626093 kubelet[2535]: E0417 23:25:45.626005 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:47.431796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471703784.mount: Deactivated successfully. Apr 17 23:25:47.860371 update_engine[1463]: I20260417 23:25:47.860280 1463 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:25:47.886248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (2945) Apr 17 23:25:47.936805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (2948) Apr 17 23:25:47.972685 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (2948) Apr 17 23:25:48.658645 containerd[1481]: time="2026-04-17T23:25:48.658586735Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:48.659072 containerd[1481]: time="2026-04-17T23:25:48.659029380Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 23:25:48.660648 containerd[1481]: time="2026-04-17T23:25:48.660600611Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:48.661836 containerd[1481]: time="2026-04-17T23:25:48.661806725Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.435384133s" Apr 17 23:25:48.661895 containerd[1481]: time="2026-04-17T23:25:48.661837383Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 23:25:48.665750 containerd[1481]: 
time="2026-04-17T23:25:48.665704366Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:25:48.669769 containerd[1481]: time="2026-04-17T23:25:48.669675960Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:25:48.682647 containerd[1481]: time="2026-04-17T23:25:48.682599963Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\"" Apr 17 23:25:48.683115 containerd[1481]: time="2026-04-17T23:25:48.683085560Z" level=info msg="StartContainer for \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\"" Apr 17 23:25:48.706377 systemd[1]: Started cri-containerd-2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158.scope - libcontainer container 2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158. Apr 17 23:25:48.738707 systemd[1]: cri-containerd-2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158.scope: Deactivated successfully. 
Apr 17 23:25:48.781967 containerd[1481]: time="2026-04-17T23:25:48.781811073Z" level=info msg="StartContainer for \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\" returns successfully" Apr 17 23:25:48.833437 containerd[1481]: time="2026-04-17T23:25:48.833286295Z" level=info msg="shim disconnected" id=2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158 namespace=k8s.io Apr 17 23:25:48.833437 containerd[1481]: time="2026-04-17T23:25:48.833457790Z" level=warning msg="cleaning up after shim disconnected" id=2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158 namespace=k8s.io Apr 17 23:25:48.833692 containerd[1481]: time="2026-04-17T23:25:48.833476324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:25:49.636558 kubelet[2535]: E0417 23:25:49.636521 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:49.644864 containerd[1481]: time="2026-04-17T23:25:49.644739979Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:25:49.661042 containerd[1481]: time="2026-04-17T23:25:49.660408788Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\"" Apr 17 23:25:49.661042 containerd[1481]: time="2026-04-17T23:25:49.660941685Z" level=info msg="StartContainer for \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\"" Apr 17 23:25:49.679430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158-rootfs.mount: Deactivated successfully. 
Apr 17 23:25:49.692426 systemd[1]: Started cri-containerd-20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb.scope - libcontainer container 20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb. Apr 17 23:25:49.712676 containerd[1481]: time="2026-04-17T23:25:49.712620389Z" level=info msg="StartContainer for \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\" returns successfully" Apr 17 23:25:49.722713 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:25:49.723421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:25:49.723488 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:25:49.728675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:25:49.728961 systemd[1]: cri-containerd-20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb.scope: Deactivated successfully. Apr 17 23:25:49.747046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb-rootfs.mount: Deactivated successfully. Apr 17 23:25:49.748861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 23:25:49.751689 containerd[1481]: time="2026-04-17T23:25:49.751611953Z" level=info msg="shim disconnected" id=20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb namespace=k8s.io Apr 17 23:25:49.751689 containerd[1481]: time="2026-04-17T23:25:49.751676958Z" level=warning msg="cleaning up after shim disconnected" id=20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb namespace=k8s.io Apr 17 23:25:49.751689 containerd[1481]: time="2026-04-17T23:25:49.751684766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:25:49.765067 containerd[1481]: time="2026-04-17T23:25:49.764999284Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:25:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:25:50.640543 kubelet[2535]: E0417 23:25:50.640470 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:50.646272 containerd[1481]: time="2026-04-17T23:25:50.646158596Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:25:50.663250 containerd[1481]: time="2026-04-17T23:25:50.663142307Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\"" Apr 17 23:25:50.664039 containerd[1481]: time="2026-04-17T23:25:50.664003736Z" level=info msg="StartContainer for \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\"" Apr 17 23:25:50.687405 systemd[1]: 
run-containerd-runc-k8s.io-ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1-runc.MGnhAs.mount: Deactivated successfully. Apr 17 23:25:50.697455 systemd[1]: Started cri-containerd-ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1.scope - libcontainer container ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1. Apr 17 23:25:50.721647 containerd[1481]: time="2026-04-17T23:25:50.721553359Z" level=info msg="StartContainer for \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\" returns successfully" Apr 17 23:25:50.722308 systemd[1]: cri-containerd-ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1.scope: Deactivated successfully. Apr 17 23:25:50.743178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1-rootfs.mount: Deactivated successfully. Apr 17 23:25:50.758235 containerd[1481]: time="2026-04-17T23:25:50.758160018Z" level=info msg="shim disconnected" id=ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1 namespace=k8s.io Apr 17 23:25:50.758235 containerd[1481]: time="2026-04-17T23:25:50.758208410Z" level=warning msg="cleaning up after shim disconnected" id=ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1 namespace=k8s.io Apr 17 23:25:50.758235 containerd[1481]: time="2026-04-17T23:25:50.758243194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:25:51.136554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386629517.mount: Deactivated successfully. 
Apr 17 23:25:51.645059 kubelet[2535]: E0417 23:25:51.644993 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:51.654765 containerd[1481]: time="2026-04-17T23:25:51.654706610Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:25:51.670745 containerd[1481]: time="2026-04-17T23:25:51.670643695Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\"" Apr 17 23:25:51.671270 containerd[1481]: time="2026-04-17T23:25:51.671194860Z" level=info msg="StartContainer for \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\"" Apr 17 23:25:51.705477 systemd[1]: Started cri-containerd-ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414.scope - libcontainer container ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414. Apr 17 23:25:51.727807 systemd[1]: cri-containerd-ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414.scope: Deactivated successfully. Apr 17 23:25:51.733107 containerd[1481]: time="2026-04-17T23:25:51.733044720Z" level=info msg="StartContainer for \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\" returns successfully" Apr 17 23:25:51.754058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414-rootfs.mount: Deactivated successfully. 
Apr 17 23:25:51.767298 containerd[1481]: time="2026-04-17T23:25:51.767199506Z" level=info msg="shim disconnected" id=ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414 namespace=k8s.io Apr 17 23:25:51.767298 containerd[1481]: time="2026-04-17T23:25:51.767291101Z" level=warning msg="cleaning up after shim disconnected" id=ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414 namespace=k8s.io Apr 17 23:25:51.767298 containerd[1481]: time="2026-04-17T23:25:51.767302562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:25:51.979486 containerd[1481]: time="2026-04-17T23:25:51.979304902Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:51.980405 containerd[1481]: time="2026-04-17T23:25:51.980189109Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 17 23:25:51.981997 containerd[1481]: time="2026-04-17T23:25:51.981938971Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:25:51.983362 containerd[1481]: time="2026-04-17T23:25:51.983290216Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.317544302s" Apr 17 23:25:51.983412 containerd[1481]: time="2026-04-17T23:25:51.983360501Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 17 23:25:51.995690 containerd[1481]: time="2026-04-17T23:25:51.995644305Z" level=info msg="CreateContainer within sandbox \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:25:52.008142 containerd[1481]: time="2026-04-17T23:25:52.008091538Z" level=info msg="CreateContainer within sandbox \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\"" Apr 17 23:25:52.008684 containerd[1481]: time="2026-04-17T23:25:52.008617894Z" level=info msg="StartContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\"" Apr 17 23:25:52.032520 systemd[1]: Started cri-containerd-10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142.scope - libcontainer container 10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142. 
Apr 17 23:25:52.080524 containerd[1481]: time="2026-04-17T23:25:52.080457531Z" level=info msg="StartContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" returns successfully" Apr 17 23:25:52.657552 kubelet[2535]: E0417 23:25:52.657497 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:52.660396 kubelet[2535]: E0417 23:25:52.660351 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:52.669259 containerd[1481]: time="2026-04-17T23:25:52.668265364Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:25:52.722411 kubelet[2535]: I0417 23:25:52.722350 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ptn2w" podStartSLOduration=2.075484051 podStartE2EDuration="16.722303776s" podCreationTimestamp="2026-04-17 23:25:36 +0000 UTC" firstStartedPulling="2026-04-17 23:25:37.337352226 +0000 UTC m=+5.817144988" lastFinishedPulling="2026-04-17 23:25:51.984171951 +0000 UTC m=+20.463964713" observedRunningTime="2026-04-17 23:25:52.685321515 +0000 UTC m=+21.165114276" watchObservedRunningTime="2026-04-17 23:25:52.722303776 +0000 UTC m=+21.202096549" Apr 17 23:25:52.765982 containerd[1481]: time="2026-04-17T23:25:52.765910840Z" level=info msg="CreateContainer within sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\"" Apr 17 23:25:52.766757 containerd[1481]: time="2026-04-17T23:25:52.766729336Z" level=info 
msg="StartContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\"" Apr 17 23:25:52.813441 systemd[1]: Started cri-containerd-8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc.scope - libcontainer container 8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc. Apr 17 23:25:52.834999 containerd[1481]: time="2026-04-17T23:25:52.834941894Z" level=info msg="StartContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" returns successfully" Apr 17 23:25:52.957271 kubelet[2535]: I0417 23:25:52.956802 2535 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:25:52.997753 systemd[1]: Created slice kubepods-burstable-podee15ec71_0ca6_4b64_99af_93bd76c00973.slice - libcontainer container kubepods-burstable-podee15ec71_0ca6_4b64_99af_93bd76c00973.slice. Apr 17 23:25:53.004925 systemd[1]: Created slice kubepods-burstable-pod43bbc33e_786c_488b_9351_c7e28217b580.slice - libcontainer container kubepods-burstable-pod43bbc33e_786c_488b_9351_c7e28217b580.slice. 
Apr 17 23:25:53.061020 kubelet[2535]: I0417 23:25:53.060966 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43bbc33e-786c-488b-9351-c7e28217b580-config-volume\") pod \"coredns-674b8bbfcf-rpchm\" (UID: \"43bbc33e-786c-488b-9351-c7e28217b580\") " pod="kube-system/coredns-674b8bbfcf-rpchm" Apr 17 23:25:53.061020 kubelet[2535]: I0417 23:25:53.061016 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee15ec71-0ca6-4b64-99af-93bd76c00973-config-volume\") pod \"coredns-674b8bbfcf-26vp4\" (UID: \"ee15ec71-0ca6-4b64-99af-93bd76c00973\") " pod="kube-system/coredns-674b8bbfcf-26vp4" Apr 17 23:25:53.061020 kubelet[2535]: I0417 23:25:53.061034 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsnt\" (UniqueName: \"kubernetes.io/projected/43bbc33e-786c-488b-9351-c7e28217b580-kube-api-access-tmsnt\") pod \"coredns-674b8bbfcf-rpchm\" (UID: \"43bbc33e-786c-488b-9351-c7e28217b580\") " pod="kube-system/coredns-674b8bbfcf-rpchm" Apr 17 23:25:53.061020 kubelet[2535]: I0417 23:25:53.061048 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wmd\" (UniqueName: \"kubernetes.io/projected/ee15ec71-0ca6-4b64-99af-93bd76c00973-kube-api-access-s5wmd\") pod \"coredns-674b8bbfcf-26vp4\" (UID: \"ee15ec71-0ca6-4b64-99af-93bd76c00973\") " pod="kube-system/coredns-674b8bbfcf-26vp4" Apr 17 23:25:53.301340 kubelet[2535]: E0417 23:25:53.301068 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:53.304256 containerd[1481]: time="2026-04-17T23:25:53.304163050Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-26vp4,Uid:ee15ec71-0ca6-4b64-99af-93bd76c00973,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:53.310577 kubelet[2535]: E0417 23:25:53.310534 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:53.311071 containerd[1481]: time="2026-04-17T23:25:53.311030249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rpchm,Uid:43bbc33e-786c-488b-9351-c7e28217b580,Namespace:kube-system,Attempt:0,}" Apr 17 23:25:53.664851 kubelet[2535]: E0417 23:25:53.664755 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:53.665180 kubelet[2535]: E0417 23:25:53.664872 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:53.682182 kubelet[2535]: I0417 23:25:53.682107 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ppwrb" podStartSLOduration=6.242561045 podStartE2EDuration="17.682086442s" podCreationTimestamp="2026-04-17 23:25:36 +0000 UTC" firstStartedPulling="2026-04-17 23:25:37.225991551 +0000 UTC m=+5.705784314" lastFinishedPulling="2026-04-17 23:25:48.665516946 +0000 UTC m=+17.145309711" observedRunningTime="2026-04-17 23:25:53.681965609 +0000 UTC m=+22.161758380" watchObservedRunningTime="2026-04-17 23:25:53.682086442 +0000 UTC m=+22.161879213" Apr 17 23:25:54.667209 kubelet[2535]: E0417 23:25:54.667148 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:55.566389 systemd-networkd[1394]: cilium_host: Link UP Apr 17 
23:25:55.566483 systemd-networkd[1394]: cilium_net: Link UP Apr 17 23:25:55.566573 systemd-networkd[1394]: cilium_net: Gained carrier Apr 17 23:25:55.566686 systemd-networkd[1394]: cilium_host: Gained carrier Apr 17 23:25:55.640337 systemd-networkd[1394]: cilium_vxlan: Link UP Apr 17 23:25:55.640343 systemd-networkd[1394]: cilium_vxlan: Gained carrier Apr 17 23:25:55.668899 kubelet[2535]: E0417 23:25:55.668859 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:55.814262 kernel: NET: Registered PF_ALG protocol family Apr 17 23:25:56.276445 systemd-networkd[1394]: cilium_net: Gained IPv6LL Apr 17 23:25:56.340817 systemd-networkd[1394]: lxc_health: Link UP Apr 17 23:25:56.347568 systemd-networkd[1394]: lxc_health: Gained carrier Apr 17 23:25:56.404359 systemd-networkd[1394]: cilium_host: Gained IPv6LL Apr 17 23:25:56.865727 systemd-networkd[1394]: lxc37398bad4213: Link UP Apr 17 23:25:56.872261 kernel: eth0: renamed from tmp6151f Apr 17 23:25:56.880808 systemd-networkd[1394]: lxc37398bad4213: Gained carrier Apr 17 23:25:56.885149 systemd-networkd[1394]: lxc123987730be8: Link UP Apr 17 23:25:56.893282 kernel: eth0: renamed from tmp41065 Apr 17 23:25:56.901444 systemd-networkd[1394]: lxc123987730be8: Gained carrier Apr 17 23:25:57.153573 kubelet[2535]: E0417 23:25:57.153434 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:57.427506 systemd-networkd[1394]: lxc_health: Gained IPv6LL Apr 17 23:25:57.555457 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL Apr 17 23:25:57.640369 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:56726.service - OpenSSH per-connection server daemon (10.0.0.1:56726). 
Apr 17 23:25:57.675253 kubelet[2535]: E0417 23:25:57.672987 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:57.677263 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 56726 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:25:57.678499 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:25:57.683863 systemd-logind[1462]: New session 10 of user core. Apr 17 23:25:57.690406 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:25:57.859941 sshd[3768]: pam_unix(sshd:session): session closed for user core Apr 17 23:25:57.862872 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:56726.service: Deactivated successfully. Apr 17 23:25:57.864134 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:25:57.864919 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:25:57.865743 systemd-logind[1462]: Removed session 10. Apr 17 23:25:58.327321 systemd-networkd[1394]: lxc37398bad4213: Gained IPv6LL Apr 17 23:25:58.579454 systemd-networkd[1394]: lxc123987730be8: Gained IPv6LL Apr 17 23:25:58.674712 kubelet[2535]: E0417 23:25:58.674672 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:25:59.934578 containerd[1481]: time="2026-04-17T23:25:59.934503273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:59.934578 containerd[1481]: time="2026-04-17T23:25:59.934547795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:59.934578 containerd[1481]: time="2026-04-17T23:25:59.934556664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:59.936355 containerd[1481]: time="2026-04-17T23:25:59.936293915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:59.940047 containerd[1481]: time="2026-04-17T23:25:59.939784236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:25:59.940047 containerd[1481]: time="2026-04-17T23:25:59.939866276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:25:59.940047 containerd[1481]: time="2026-04-17T23:25:59.939878719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:59.940047 containerd[1481]: time="2026-04-17T23:25:59.939950854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:25:59.966535 systemd[1]: Started cri-containerd-41065c22e31a6a29ad294cf6a515583eb9bc096d3cc91619421c59f0478b07b0.scope - libcontainer container 41065c22e31a6a29ad294cf6a515583eb9bc096d3cc91619421c59f0478b07b0. Apr 17 23:25:59.969167 systemd[1]: Started cri-containerd-6151f907fedc1efcbb31a4d77e706adbeda3dcbafd8f045df583641fae9056dd.scope - libcontainer container 6151f907fedc1efcbb31a4d77e706adbeda3dcbafd8f045df583641fae9056dd. 
Apr 17 23:25:59.977910 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:25:59.980130 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:26:00.002975 containerd[1481]: time="2026-04-17T23:26:00.002948893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26vp4,Uid:ee15ec71-0ca6-4b64-99af-93bd76c00973,Namespace:kube-system,Attempt:0,} returns sandbox id \"41065c22e31a6a29ad294cf6a515583eb9bc096d3cc91619421c59f0478b07b0\"" Apr 17 23:26:00.003828 kubelet[2535]: E0417 23:26:00.003809 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:00.009840 containerd[1481]: time="2026-04-17T23:26:00.009708658Z" level=info msg="CreateContainer within sandbox \"41065c22e31a6a29ad294cf6a515583eb9bc096d3cc91619421c59f0478b07b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:26:00.009840 containerd[1481]: time="2026-04-17T23:26:00.009798503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rpchm,Uid:43bbc33e-786c-488b-9351-c7e28217b580,Namespace:kube-system,Attempt:0,} returns sandbox id \"6151f907fedc1efcbb31a4d77e706adbeda3dcbafd8f045df583641fae9056dd\"" Apr 17 23:26:00.011032 kubelet[2535]: E0417 23:26:00.011011 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:00.015966 containerd[1481]: time="2026-04-17T23:26:00.015885779Z" level=info msg="CreateContainer within sandbox \"6151f907fedc1efcbb31a4d77e706adbeda3dcbafd8f045df583641fae9056dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:26:00.032229 containerd[1481]: 
time="2026-04-17T23:26:00.032172137Z" level=info msg="CreateContainer within sandbox \"41065c22e31a6a29ad294cf6a515583eb9bc096d3cc91619421c59f0478b07b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de5eba82a1b92b4fcfb0267d932310f7e4b062b0f5bafeef11ed9924eb6888d4\"" Apr 17 23:26:00.033319 containerd[1481]: time="2026-04-17T23:26:00.032707692Z" level=info msg="StartContainer for \"de5eba82a1b92b4fcfb0267d932310f7e4b062b0f5bafeef11ed9924eb6888d4\"" Apr 17 23:26:00.034097 containerd[1481]: time="2026-04-17T23:26:00.034069705Z" level=info msg="CreateContainer within sandbox \"6151f907fedc1efcbb31a4d77e706adbeda3dcbafd8f045df583641fae9056dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13b548ce98fd516b856ee3a3be0c1287c20b0069852cc348bc04d4dd037ccbfa\"" Apr 17 23:26:00.034435 containerd[1481]: time="2026-04-17T23:26:00.034415191Z" level=info msg="StartContainer for \"13b548ce98fd516b856ee3a3be0c1287c20b0069852cc348bc04d4dd037ccbfa\"" Apr 17 23:26:00.059630 systemd[1]: Started cri-containerd-13b548ce98fd516b856ee3a3be0c1287c20b0069852cc348bc04d4dd037ccbfa.scope - libcontainer container 13b548ce98fd516b856ee3a3be0c1287c20b0069852cc348bc04d4dd037ccbfa. Apr 17 23:26:00.060628 systemd[1]: Started cri-containerd-de5eba82a1b92b4fcfb0267d932310f7e4b062b0f5bafeef11ed9924eb6888d4.scope - libcontainer container de5eba82a1b92b4fcfb0267d932310f7e4b062b0f5bafeef11ed9924eb6888d4. 
Apr 17 23:26:00.086083 containerd[1481]: time="2026-04-17T23:26:00.086043356Z" level=info msg="StartContainer for \"13b548ce98fd516b856ee3a3be0c1287c20b0069852cc348bc04d4dd037ccbfa\" returns successfully" Apr 17 23:26:00.086605 containerd[1481]: time="2026-04-17T23:26:00.086043657Z" level=info msg="StartContainer for \"de5eba82a1b92b4fcfb0267d932310f7e4b062b0f5bafeef11ed9924eb6888d4\" returns successfully" Apr 17 23:26:00.693578 kubelet[2535]: E0417 23:26:00.693329 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:00.696182 kubelet[2535]: E0417 23:26:00.696123 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:00.703969 kubelet[2535]: I0417 23:26:00.703550 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rpchm" podStartSLOduration=24.703533677 podStartE2EDuration="24.703533677s" podCreationTimestamp="2026-04-17 23:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:26:00.702780509 +0000 UTC m=+29.182573284" watchObservedRunningTime="2026-04-17 23:26:00.703533677 +0000 UTC m=+29.183326439" Apr 17 23:26:00.723323 kubelet[2535]: I0417 23:26:00.723135 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-26vp4" podStartSLOduration=24.723116453 podStartE2EDuration="24.723116453s" podCreationTimestamp="2026-04-17 23:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:26:00.712343454 +0000 UTC m=+29.192136227" watchObservedRunningTime="2026-04-17 23:26:00.723116453 +0000 UTC 
m=+29.202909226" Apr 17 23:26:00.939912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999045501.mount: Deactivated successfully. Apr 17 23:26:01.696568 kubelet[2535]: E0417 23:26:01.696511 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:01.696990 kubelet[2535]: E0417 23:26:01.696603 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:02.698100 kubelet[2535]: E0417 23:26:02.698030 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:02.896643 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:55990.service - OpenSSH per-connection server daemon (10.0.0.1:55990). Apr 17 23:26:02.931893 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 55990 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:02.933430 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:02.940077 systemd-logind[1462]: New session 11 of user core. Apr 17 23:26:02.948971 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:26:03.157922 sshd[3963]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:03.160474 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:55990.service: Deactivated successfully. Apr 17 23:26:03.161764 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:26:03.162281 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:26:03.163247 systemd-logind[1462]: Removed session 11. Apr 17 23:26:08.168419 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:55996.service - OpenSSH per-connection server daemon (10.0.0.1:55996). 
Apr 17 23:26:08.202970 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 55996 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:08.204483 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:08.208826 systemd-logind[1462]: New session 12 of user core. Apr 17 23:26:08.224501 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:26:08.326196 sshd[3980]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:08.328839 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:55996.service: Deactivated successfully. Apr 17 23:26:08.330069 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:26:08.330573 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:26:08.331410 systemd-logind[1462]: Removed session 12. Apr 17 23:26:13.342331 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:33732.service - OpenSSH per-connection server daemon (10.0.0.1:33732). Apr 17 23:26:13.377166 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 33732 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:13.379041 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:13.382728 systemd-logind[1462]: New session 13 of user core. Apr 17 23:26:13.396437 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:26:13.496513 sshd[3997]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:13.508902 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:33732.service: Deactivated successfully. Apr 17 23:26:13.510608 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:26:13.512054 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:26:13.513425 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:33740.service - OpenSSH per-connection server daemon (10.0.0.1:33740). 
Apr 17 23:26:13.514521 systemd-logind[1462]: Removed session 13. Apr 17 23:26:13.564175 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:13.565520 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:13.569066 systemd-logind[1462]: New session 14 of user core. Apr 17 23:26:13.576378 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:26:13.744413 sshd[4013]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:13.753374 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:33740.service: Deactivated successfully. Apr 17 23:26:13.755343 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:26:13.756431 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:26:13.766043 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:33742.service - OpenSSH per-connection server daemon (10.0.0.1:33742). Apr 17 23:26:13.768345 systemd-logind[1462]: Removed session 14. Apr 17 23:26:13.798089 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 33742 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:13.799295 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:13.803094 systemd-logind[1462]: New session 15 of user core. Apr 17 23:26:13.812399 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:26:13.936892 sshd[4026]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:13.939946 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:33742.service: Deactivated successfully. Apr 17 23:26:13.941192 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:26:13.941995 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:26:13.942902 systemd-logind[1462]: Removed session 15. 
Apr 17 23:26:18.952668 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Apr 17 23:26:18.985754 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:18.986802 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:18.990203 systemd-logind[1462]: New session 16 of user core. Apr 17 23:26:18.999390 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:26:19.096923 sshd[4040]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:19.099385 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:33752.service: Deactivated successfully. Apr 17 23:26:19.100583 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:26:19.101108 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:26:19.101799 systemd-logind[1462]: Removed session 16. Apr 17 23:26:24.107437 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:53882.service - OpenSSH per-connection server daemon (10.0.0.1:53882). Apr 17 23:26:24.140289 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:24.141597 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:24.144969 systemd-logind[1462]: New session 17 of user core. Apr 17 23:26:24.154395 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:26:24.253882 sshd[4054]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:24.265574 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:53882.service: Deactivated successfully. Apr 17 23:26:24.266768 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:26:24.267840 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. 
Apr 17 23:26:24.268797 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:53898.service - OpenSSH per-connection server daemon (10.0.0.1:53898). Apr 17 23:26:24.269424 systemd-logind[1462]: Removed session 17. Apr 17 23:26:24.301324 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 53898 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:24.302649 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:24.305841 systemd-logind[1462]: New session 18 of user core. Apr 17 23:26:24.315499 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:26:24.482471 sshd[4068]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:24.493371 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:53898.service: Deactivated successfully. Apr 17 23:26:24.494543 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:26:24.495712 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:26:24.496950 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:53914.service - OpenSSH per-connection server daemon (10.0.0.1:53914). Apr 17 23:26:24.497625 systemd-logind[1462]: Removed session 18. Apr 17 23:26:24.533436 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 53914 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:24.534739 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:24.538378 systemd-logind[1462]: New session 19 of user core. Apr 17 23:26:24.550532 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:26:24.926301 sshd[4081]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:24.932459 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:53914.service: Deactivated successfully. Apr 17 23:26:24.933682 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:26:24.935179 systemd-logind[1462]: Session 19 logged out. 
Waiting for processes to exit. Apr 17 23:26:24.941685 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928). Apr 17 23:26:24.942782 systemd-logind[1462]: Removed session 19. Apr 17 23:26:24.971891 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:24.973054 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:24.976841 systemd-logind[1462]: New session 20 of user core. Apr 17 23:26:24.986776 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:26:25.196690 sshd[4102]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:25.204244 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:53928.service: Deactivated successfully. Apr 17 23:26:25.205715 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:26:25.207001 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:26:25.219466 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:53932.service - OpenSSH per-connection server daemon (10.0.0.1:53932). Apr 17 23:26:25.220594 systemd-logind[1462]: Removed session 20. Apr 17 23:26:25.255537 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 53932 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:25.257066 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:25.261037 systemd-logind[1462]: New session 21 of user core. Apr 17 23:26:25.266480 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:26:25.366190 sshd[4114]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:25.368607 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:53932.service: Deactivated successfully. Apr 17 23:26:25.369913 systemd[1]: session-21.scope: Deactivated successfully. 
Apr 17 23:26:25.370369 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:26:25.371113 systemd-logind[1462]: Removed session 21. Apr 17 23:26:30.383555 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:44710.service - OpenSSH per-connection server daemon (10.0.0.1:44710). Apr 17 23:26:30.418833 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 44710 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:30.420142 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:30.424814 systemd-logind[1462]: New session 22 of user core. Apr 17 23:26:30.440589 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 23:26:30.549575 sshd[4130]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:30.552378 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:44710.service: Deactivated successfully. Apr 17 23:26:30.553691 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:26:30.554238 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:26:30.555385 systemd-logind[1462]: Removed session 22. Apr 17 23:26:35.566582 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:44722.service - OpenSSH per-connection server daemon (10.0.0.1:44722). Apr 17 23:26:35.608461 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 44722 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:35.609915 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:35.614011 systemd-logind[1462]: New session 23 of user core. Apr 17 23:26:35.626797 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:26:35.743550 sshd[4147]: pam_unix(sshd:session): session closed for user core Apr 17 23:26:35.751960 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:44722.service: Deactivated successfully. 
Apr 17 23:26:35.753337 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:26:35.754441 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:26:35.764533 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732). Apr 17 23:26:35.765587 systemd-logind[1462]: Removed session 23. Apr 17 23:26:35.796338 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:26:35.797079 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:26:35.800494 systemd-logind[1462]: New session 24 of user core. Apr 17 23:26:35.811425 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 23:26:37.157081 containerd[1481]: time="2026-04-17T23:26:37.156971766Z" level=info msg="StopContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" with timeout 30 (s)" Apr 17 23:26:37.157674 containerd[1481]: time="2026-04-17T23:26:37.157613546Z" level=info msg="Stop container \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" with signal terminated" Apr 17 23:26:37.162987 systemd[1]: run-containerd-runc-k8s.io-8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc-runc.U5uHNg.mount: Deactivated successfully. Apr 17 23:26:37.176608 systemd[1]: cri-containerd-10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142.scope: Deactivated successfully. 
Apr 17 23:26:37.179668 containerd[1481]: time="2026-04-17T23:26:37.179592021Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:26:37.183531 containerd[1481]: time="2026-04-17T23:26:37.183513422Z" level=info msg="StopContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" with timeout 2 (s)" Apr 17 23:26:37.183852 containerd[1481]: time="2026-04-17T23:26:37.183829959Z" level=info msg="Stop container \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" with signal terminated" Apr 17 23:26:37.194089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142-rootfs.mount: Deactivated successfully. Apr 17 23:26:37.195526 systemd-networkd[1394]: lxc_health: Link DOWN Apr 17 23:26:37.195540 systemd-networkd[1394]: lxc_health: Lost carrier Apr 17 23:26:37.205837 containerd[1481]: time="2026-04-17T23:26:37.205746800Z" level=info msg="shim disconnected" id=10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142 namespace=k8s.io Apr 17 23:26:37.205837 containerd[1481]: time="2026-04-17T23:26:37.205803046Z" level=warning msg="cleaning up after shim disconnected" id=10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142 namespace=k8s.io Apr 17 23:26:37.205968 containerd[1481]: time="2026-04-17T23:26:37.205829074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:26:37.217720 systemd[1]: cri-containerd-8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc.scope: Deactivated successfully. Apr 17 23:26:37.218002 systemd[1]: cri-containerd-8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc.scope: Consumed 5.612s CPU time. 
Apr 17 23:26:37.223712 containerd[1481]: time="2026-04-17T23:26:37.223660605Z" level=info msg="StopContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" returns successfully" Apr 17 23:26:37.227720 containerd[1481]: time="2026-04-17T23:26:37.227658624Z" level=info msg="StopPodSandbox for \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\"" Apr 17 23:26:37.227720 containerd[1481]: time="2026-04-17T23:26:37.227708530Z" level=info msg="Container to stop \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:26:37.229165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b-shm.mount: Deactivated successfully. Apr 17 23:26:37.233064 systemd[1]: cri-containerd-b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b.scope: Deactivated successfully. Apr 17 23:26:37.238517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc-rootfs.mount: Deactivated successfully. 
Apr 17 23:26:37.249886 containerd[1481]: time="2026-04-17T23:26:37.249789676Z" level=info msg="shim disconnected" id=8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc namespace=k8s.io Apr 17 23:26:37.249886 containerd[1481]: time="2026-04-17T23:26:37.249884834Z" level=warning msg="cleaning up after shim disconnected" id=8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc namespace=k8s.io Apr 17 23:26:37.249886 containerd[1481]: time="2026-04-17T23:26:37.249896672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:26:37.257161 containerd[1481]: time="2026-04-17T23:26:37.257082440Z" level=info msg="shim disconnected" id=b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b namespace=k8s.io Apr 17 23:26:37.257161 containerd[1481]: time="2026-04-17T23:26:37.257151941Z" level=warning msg="cleaning up after shim disconnected" id=b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b namespace=k8s.io Apr 17 23:26:37.257161 containerd[1481]: time="2026-04-17T23:26:37.257161884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:26:37.264940 containerd[1481]: time="2026-04-17T23:26:37.264868445Z" level=info msg="StopContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" returns successfully" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265293824Z" level=info msg="StopPodSandbox for \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\"" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265326838Z" level=info msg="Container to stop \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265336637Z" level=info msg="Container to stop \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265343592Z" level=info msg="Container to stop \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265351084Z" level=info msg="Container to stop \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:26:37.265630 containerd[1481]: time="2026-04-17T23:26:37.265357728Z" level=info msg="Container to stop \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:26:37.270692 systemd[1]: cri-containerd-4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43.scope: Deactivated successfully. Apr 17 23:26:37.280521 containerd[1481]: time="2026-04-17T23:26:37.280474945Z" level=info msg="TearDown network for sandbox \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\" successfully" Apr 17 23:26:37.280521 containerd[1481]: time="2026-04-17T23:26:37.280509175Z" level=info msg="StopPodSandbox for \"b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b\" returns successfully" Apr 17 23:26:37.289835 containerd[1481]: time="2026-04-17T23:26:37.289774879Z" level=info msg="shim disconnected" id=4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43 namespace=k8s.io Apr 17 23:26:37.289835 containerd[1481]: time="2026-04-17T23:26:37.289832713Z" level=warning msg="cleaning up after shim disconnected" id=4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43 namespace=k8s.io Apr 17 23:26:37.289835 containerd[1481]: time="2026-04-17T23:26:37.289840114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:26:37.303550 containerd[1481]: time="2026-04-17T23:26:37.303511316Z" level=info 
msg="TearDown network for sandbox \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" successfully" Apr 17 23:26:37.303550 containerd[1481]: time="2026-04-17T23:26:37.303542737Z" level=info msg="StopPodSandbox for \"4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43\" returns successfully" Apr 17 23:26:37.376472 kubelet[2535]: I0417 23:26:37.376385 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-run\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.376472 kubelet[2535]: I0417 23:26:37.376440 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-etc-cni-netd\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.376472 kubelet[2535]: I0417 23:26:37.376465 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cni-path\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.376472 kubelet[2535]: I0417 23:26:37.376482 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-lib-modules\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376510 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-cilium-config-path\") pod 
\"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\" (UID: \"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376533 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6twr\" (UniqueName: \"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-kube-api-access-l6twr\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376549 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28zg\" (UniqueName: \"kubernetes.io/projected/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-kube-api-access-q28zg\") pod \"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\" (UID: \"175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376565 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-hubble-tls\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376578 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-kernel\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.377021 kubelet[2535]: I0417 23:26:37.376598 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-net\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.377277 kubelet[2535]: I0417 23:26:37.376565 2535 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.377277 kubelet[2535]: I0417 23:26:37.376566 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.377277 kubelet[2535]: I0417 23:26:37.376617 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.377277 kubelet[2535]: I0417 23:26:37.376622 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.377277 kubelet[2535]: I0417 23:26:37.376630 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.377550 kubelet[2535]: I0417 23:26:37.376641 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.379947 kubelet[2535]: I0417 23:26:37.379879 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-hostproc\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.379947 kubelet[2535]: I0417 23:26:37.379920 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a75f119e-5188-4013-ac5c-55bcd5b130b6-clustermesh-secrets\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.379947 kubelet[2535]: I0417 23:26:37.379936 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-cgroup\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.379947 kubelet[2535]: I0417 23:26:37.379929 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7" (UID: "175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:26:37.379947 kubelet[2535]: I0417 23:26:37.379955 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-config-path\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.379968 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-bpf-maps\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.379980 2535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-xtables-lock\") pod \"a75f119e-5188-4013-ac5c-55bcd5b130b6\" (UID: \"a75f119e-5188-4013-ac5c-55bcd5b130b6\") " Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380020 2535 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380028 2535 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380036 2535 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380042 2535 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380049 2535 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380102 kubelet[2535]: I0417 23:26:37.380055 2535 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380278 kubelet[2535]: I0417 23:26:37.380062 2535 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 17 23:26:37.380278 kubelet[2535]: I0417 23:26:37.380081 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.380278 kubelet[2535]: I0417 23:26:37.380100 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.380429 kubelet[2535]: I0417 23:26:37.380405 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.381046 kubelet[2535]: I0417 23:26:37.380408 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:26:37.381115 kubelet[2535]: I0417 23:26:37.381087 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-kube-api-access-q28zg" (OuterVolumeSpecName: "kube-api-access-q28zg") pod "175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7" (UID: "175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7"). InnerVolumeSpecName "kube-api-access-q28zg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:26:37.381649 kubelet[2535]: I0417 23:26:37.381612 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:26:37.381685 kubelet[2535]: I0417 23:26:37.381675 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:26:37.382098 kubelet[2535]: I0417 23:26:37.382077 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75f119e-5188-4013-ac5c-55bcd5b130b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:26:37.382308 kubelet[2535]: I0417 23:26:37.382289 2535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-kube-api-access-l6twr" (OuterVolumeSpecName: "kube-api-access-l6twr") pod "a75f119e-5188-4013-ac5c-55bcd5b130b6" (UID: "a75f119e-5188-4013-ac5c-55bcd5b130b6"). InnerVolumeSpecName "kube-api-access-l6twr". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:26:37.480897 kubelet[2535]: I0417 23:26:37.480743 2535 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q28zg\" (UniqueName: \"kubernetes.io/projected/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7-kube-api-access-q28zg\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.480897 kubelet[2535]: I0417 23:26:37.480787 2535 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.480897 kubelet[2535]: I0417 23:26:37.480796 2535 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481140 2535 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a75f119e-5188-4013-ac5c-55bcd5b130b6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481162 2535 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481172 2535 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a75f119e-5188-4013-ac5c-55bcd5b130b6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481180 2535 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481194 2535 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a75f119e-5188-4013-ac5c-55bcd5b130b6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.483243 kubelet[2535]: I0417 23:26:37.481202 2535 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l6twr\" (UniqueName: \"kubernetes.io/projected/a75f119e-5188-4013-ac5c-55bcd5b130b6-kube-api-access-l6twr\") on node \"localhost\" DevicePath \"\""
Apr 17 23:26:37.596008 systemd[1]: Removed slice kubepods-burstable-poda75f119e_5188_4013_ac5c_55bcd5b130b6.slice - libcontainer container kubepods-burstable-poda75f119e_5188_4013_ac5c_55bcd5b130b6.slice.
Apr 17 23:26:37.596087 systemd[1]: kubepods-burstable-poda75f119e_5188_4013_ac5c_55bcd5b130b6.slice: Consumed 5.681s CPU time.
Apr 17 23:26:37.596902 systemd[1]: Removed slice kubepods-besteffort-pod175ed8f4_6d81_4c39_b9f1_2ff3b73ffea7.slice - libcontainer container kubepods-besteffort-pod175ed8f4_6d81_4c39_b9f1_2ff3b73ffea7.slice.
Apr 17 23:26:37.769571 kubelet[2535]: I0417 23:26:37.769442 2535 scope.go:117] "RemoveContainer" containerID="10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142"
Apr 17 23:26:37.770615 containerd[1481]: time="2026-04-17T23:26:37.770584739Z" level=info msg="RemoveContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\""
Apr 17 23:26:37.774512 containerd[1481]: time="2026-04-17T23:26:37.774479119Z" level=info msg="RemoveContainer for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" returns successfully"
Apr 17 23:26:37.775049 kubelet[2535]: I0417 23:26:37.774923 2535 scope.go:117] "RemoveContainer" containerID="10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142"
Apr 17 23:26:37.777801 containerd[1481]: time="2026-04-17T23:26:37.777717916Z" level=error msg="ContainerStatus for \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\": not found"
Apr 17 23:26:37.784854 kubelet[2535]: E0417 23:26:37.784786 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\": not found" containerID="10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142"
Apr 17 23:26:37.784854 kubelet[2535]: I0417 23:26:37.784850 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142"} err="failed to get container status \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\": rpc error: code = NotFound desc = an error occurred when try to find container \"10001bef6f0e99710756aed2f5e33544dec2533b115b83794e0377b8ef384142\": not found"
Apr 17 23:26:37.785267 kubelet[2535]: I0417 23:26:37.784891 2535 scope.go:117] "RemoveContainer" containerID="8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc"
Apr 17 23:26:37.787011 containerd[1481]: time="2026-04-17T23:26:37.786746229Z" level=info msg="RemoveContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\""
Apr 17 23:26:37.791013 containerd[1481]: time="2026-04-17T23:26:37.790971730Z" level=info msg="RemoveContainer for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" returns successfully"
Apr 17 23:26:37.791181 kubelet[2535]: I0417 23:26:37.791155 2535 scope.go:117] "RemoveContainer" containerID="ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414"
Apr 17 23:26:37.792059 containerd[1481]: time="2026-04-17T23:26:37.791996057Z" level=info msg="RemoveContainer for \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\""
Apr 17 23:26:37.795293 containerd[1481]: time="2026-04-17T23:26:37.795185353Z" level=info msg="RemoveContainer for \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\" returns successfully"
Apr 17 23:26:37.795544 kubelet[2535]: I0417 23:26:37.795496 2535 scope.go:117] "RemoveContainer" containerID="ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1"
Apr 17 23:26:37.796532 containerd[1481]: time="2026-04-17T23:26:37.796496452Z" level=info msg="RemoveContainer for \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\""
Apr 17 23:26:37.799960 containerd[1481]: time="2026-04-17T23:26:37.799925292Z" level=info msg="RemoveContainer for \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\" returns successfully"
Apr 17 23:26:37.800115 kubelet[2535]: I0417 23:26:37.800081 2535 scope.go:117] "RemoveContainer" containerID="20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb"
Apr 17 23:26:37.801080 containerd[1481]: time="2026-04-17T23:26:37.801056300Z" level=info msg="RemoveContainer for \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\""
Apr 17 23:26:37.803865 containerd[1481]: time="2026-04-17T23:26:37.803802731Z" level=info msg="RemoveContainer for \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\" returns successfully"
Apr 17 23:26:37.804098 kubelet[2535]: I0417 23:26:37.804071 2535 scope.go:117] "RemoveContainer" containerID="2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158"
Apr 17 23:26:37.805079 containerd[1481]: time="2026-04-17T23:26:37.805049501Z" level=info msg="RemoveContainer for \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\""
Apr 17 23:26:37.807411 containerd[1481]: time="2026-04-17T23:26:37.807376493Z" level=info msg="RemoveContainer for \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\" returns successfully"
Apr 17 23:26:37.807559 kubelet[2535]: I0417 23:26:37.807522 2535 scope.go:117] "RemoveContainer" containerID="8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc"
Apr 17 23:26:37.807732 containerd[1481]: time="2026-04-17T23:26:37.807685698Z" level=error msg="ContainerStatus for \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\": not found"
Apr 17 23:26:37.807922 kubelet[2535]: E0417 23:26:37.807827 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\": not found" containerID="8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc"
Apr 17 23:26:37.807922 kubelet[2535]: I0417 23:26:37.807855 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc"} err="failed to get container status \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c175d28a8ae6a0c03b290935bbb3c353d7060e4da7fea037a46840530cfc1cc\": not found"
Apr 17 23:26:37.807922 kubelet[2535]: I0417 23:26:37.807872 2535 scope.go:117] "RemoveContainer" containerID="ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414"
Apr 17 23:26:37.808094 containerd[1481]: time="2026-04-17T23:26:37.808055475Z" level=error msg="ContainerStatus for \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\": not found"
Apr 17 23:26:37.808178 kubelet[2535]: E0417 23:26:37.808157 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\": not found" containerID="ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414"
Apr 17 23:26:37.808197 kubelet[2535]: I0417 23:26:37.808185 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414"} err="failed to get container status \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba14ca9c79c9995c87baab93aa17778e01d43195f2a3baafc7d6c383a21f5414\": not found"
Apr 17 23:26:37.808237 kubelet[2535]: I0417 23:26:37.808200 2535 scope.go:117] "RemoveContainer" containerID="ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1"
Apr 17 23:26:37.808436 containerd[1481]: time="2026-04-17T23:26:37.808405235Z" level=error msg="ContainerStatus for \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\": not found"
Apr 17 23:26:37.808541 kubelet[2535]: E0417 23:26:37.808525 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\": not found" containerID="ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1"
Apr 17 23:26:37.808583 kubelet[2535]: I0417 23:26:37.808544 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1"} err="failed to get container status \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca92bce5ce4fada121847df0c562f10266a6f7f62d828f12e48c836ce38458d1\": not found"
Apr 17 23:26:37.808583 kubelet[2535]: I0417 23:26:37.808554 2535 scope.go:117] "RemoveContainer" containerID="20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb"
Apr 17 23:26:37.808772 containerd[1481]: time="2026-04-17T23:26:37.808739752Z" level=error msg="ContainerStatus for \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\": not found"
Apr 17 23:26:37.808892 kubelet[2535]: E0417 23:26:37.808871 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\": not found" containerID="20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb"
Apr 17 23:26:37.808935 kubelet[2535]: I0417 23:26:37.808896 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb"} err="failed to get container status \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"20b9670854d2a083f47a182b88c3445ec051f7966fc5fba943d1b03d57a1a4fb\": not found"
Apr 17 23:26:37.808935 kubelet[2535]: I0417 23:26:37.808907 2535 scope.go:117] "RemoveContainer" containerID="2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158"
Apr 17 23:26:37.809110 containerd[1481]: time="2026-04-17T23:26:37.809071981Z" level=error msg="ContainerStatus for \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\": not found"
Apr 17 23:26:37.809247 kubelet[2535]: E0417 23:26:37.809188 2535 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\": not found" containerID="2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158"
Apr 17 23:26:37.809247 kubelet[2535]: I0417 23:26:37.809236 2535 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158"} err="failed to get container status \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\": rpc error: code = NotFound desc = an error occurred when try to find container \"2740267a581fee608951903478c7aa3e1c89832f4fe35047ad1d119989e0b158\": not found"
Apr 17 23:26:38.158709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b56ddd5ea8ca504999bf83be80c56075b2bcfe6615184645e119b4379fa4624b-rootfs.mount: Deactivated successfully.
Apr 17 23:26:38.158808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43-rootfs.mount: Deactivated successfully.
Apr 17 23:26:38.158873 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4be52f315402e12282c2504e9c8ccc5714e13048a1df75837ea727d4843f5f43-shm.mount: Deactivated successfully.
Apr 17 23:26:38.158918 systemd[1]: var-lib-kubelet-pods-175ed8f4\x2d6d81\x2d4c39\x2db9f1\x2d2ff3b73ffea7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq28zg.mount: Deactivated successfully.
Apr 17 23:26:38.158961 systemd[1]: var-lib-kubelet-pods-a75f119e\x2d5188\x2d4013\x2dac5c\x2d55bcd5b130b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl6twr.mount: Deactivated successfully.
Apr 17 23:26:38.158998 systemd[1]: var-lib-kubelet-pods-a75f119e\x2d5188\x2d4013\x2dac5c\x2d55bcd5b130b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 17 23:26:38.159036 systemd[1]: var-lib-kubelet-pods-a75f119e\x2d5188\x2d4013\x2dac5c\x2d55bcd5b130b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 17 23:26:39.103254 sshd[4161]: pam_unix(sshd:session): session closed for user core
Apr 17 23:26:39.113473 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:44732.service: Deactivated successfully.
Apr 17 23:26:39.115051 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:26:39.116386 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:26:39.117782 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:44740.service - OpenSSH per-connection server daemon (10.0.0.1:44740).
Apr 17 23:26:39.118598 systemd-logind[1462]: Removed session 24.
Apr 17 23:26:39.154983 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 44740 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:26:39.156079 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:26:39.160067 systemd-logind[1462]: New session 25 of user core.
Apr 17 23:26:39.172399 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:26:39.593034 sshd[4328]: pam_unix(sshd:session): session closed for user core
Apr 17 23:26:39.596387 kubelet[2535]: I0417 23:26:39.595924 2535 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7" path="/var/lib/kubelet/pods/175ed8f4-6d81-4c39-b9f1-2ff3b73ffea7/volumes"
Apr 17 23:26:39.596387 kubelet[2535]: I0417 23:26:39.596209 2535 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75f119e-5188-4013-ac5c-55bcd5b130b6" path="/var/lib/kubelet/pods/a75f119e-5188-4013-ac5c-55bcd5b130b6/volumes"
Apr 17 23:26:39.599057 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:44740.service: Deactivated successfully.
Apr 17 23:26:39.600612 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:26:39.602793 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:26:39.611728 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650).
Apr 17 23:26:39.616971 systemd-logind[1462]: Removed session 25.
Apr 17 23:26:39.628261 systemd[1]: Created slice kubepods-burstable-podbbbea3b0_c342_4356_8833_a4a57888d303.slice - libcontainer container kubepods-burstable-podbbbea3b0_c342_4356_8833_a4a57888d303.slice.
Apr 17 23:26:39.657294 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:26:39.658011 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:26:39.661416 systemd-logind[1462]: New session 26 of user core.
Apr 17 23:26:39.676423 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 17 23:26:39.697555 kubelet[2535]: I0417 23:26:39.697434 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-cilium-cgroup\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697555 kubelet[2535]: I0417 23:26:39.697492 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-xtables-lock\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697555 kubelet[2535]: I0417 23:26:39.697533 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbbea3b0-c342-4356-8833-a4a57888d303-clustermesh-secrets\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697616 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-host-proc-sys-kernel\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697657 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-cni-path\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697685 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbbea3b0-c342-4356-8833-a4a57888d303-cilium-ipsec-secrets\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697702 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-cilium-run\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697716 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-hostproc\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.697845 kubelet[2535]: I0417 23:26:39.697728 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqvj7\" (UniqueName: \"kubernetes.io/projected/bbbea3b0-c342-4356-8833-a4a57888d303-kube-api-access-wqvj7\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697754 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbbea3b0-c342-4356-8833-a4a57888d303-hubble-tls\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697777 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-bpf-maps\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697789 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-etc-cni-netd\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697808 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-lib-modules\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697855 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbbea3b0-c342-4356-8833-a4a57888d303-cilium-config-path\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.698026 kubelet[2535]: I0417 23:26:39.697870 2535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbbea3b0-c342-4356-8833-a4a57888d303-host-proc-sys-net\") pod \"cilium-fmc79\" (UID: \"bbbea3b0-c342-4356-8833-a4a57888d303\") " pod="kube-system/cilium-fmc79"
Apr 17 23:26:39.728945 sshd[4341]: pam_unix(sshd:session): session closed for user core
Apr 17 23:26:39.737889 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:53650.service: Deactivated successfully.
Apr 17 23:26:39.739520 systemd[1]: session-26.scope: Deactivated successfully.
Apr 17 23:26:39.740618 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit.
Apr 17 23:26:39.749524 systemd[1]: Started sshd@26-10.0.0.7:22-10.0.0.1:53656.service - OpenSSH per-connection server daemon (10.0.0.1:53656).
Apr 17 23:26:39.750371 systemd-logind[1462]: Removed session 26.
Apr 17 23:26:39.785360 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 53656 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:26:39.786657 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:26:39.790444 systemd-logind[1462]: New session 27 of user core.
Apr 17 23:26:39.796422 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 17 23:26:39.935366 kubelet[2535]: E0417 23:26:39.935187 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:39.936075 containerd[1481]: time="2026-04-17T23:26:39.935850189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmc79,Uid:bbbea3b0-c342-4356-8833-a4a57888d303,Namespace:kube-system,Attempt:0,}"
Apr 17 23:26:39.961167 containerd[1481]: time="2026-04-17T23:26:39.960687926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:26:39.961167 containerd[1481]: time="2026-04-17T23:26:39.960866459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:26:39.961167 containerd[1481]: time="2026-04-17T23:26:39.960884816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:26:39.961167 containerd[1481]: time="2026-04-17T23:26:39.961077704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:26:39.985523 systemd[1]: Started cri-containerd-0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428.scope - libcontainer container 0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428.
Apr 17 23:26:40.007725 containerd[1481]: time="2026-04-17T23:26:40.007696256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmc79,Uid:bbbea3b0-c342-4356-8833-a4a57888d303,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\""
Apr 17 23:26:40.008569 kubelet[2535]: E0417 23:26:40.008535 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:40.013784 containerd[1481]: time="2026-04-17T23:26:40.013749627Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:26:40.023191 containerd[1481]: time="2026-04-17T23:26:40.023137230Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6\""
Apr 17 23:26:40.023780 containerd[1481]: time="2026-04-17T23:26:40.023652637Z" level=info msg="StartContainer for \"9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6\""
Apr 17 23:26:40.045399 systemd[1]: Started cri-containerd-9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6.scope - libcontainer container 9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6.
Apr 17 23:26:40.065921 containerd[1481]: time="2026-04-17T23:26:40.065862589Z" level=info msg="StartContainer for \"9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6\" returns successfully"
Apr 17 23:26:40.074061 systemd[1]: cri-containerd-9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6.scope: Deactivated successfully.
Apr 17 23:26:40.098259 containerd[1481]: time="2026-04-17T23:26:40.098164494Z" level=info msg="shim disconnected" id=9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6 namespace=k8s.io
Apr 17 23:26:40.098259 containerd[1481]: time="2026-04-17T23:26:40.098240378Z" level=warning msg="cleaning up after shim disconnected" id=9ca69bb563cb8defecc497a1c61371aec9a5bd64c19ca44e85c5eb97ea8c02e6 namespace=k8s.io
Apr 17 23:26:40.098259 containerd[1481]: time="2026-04-17T23:26:40.098249688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:26:40.781891 kubelet[2535]: E0417 23:26:40.781804 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:40.786402 containerd[1481]: time="2026-04-17T23:26:40.786290985Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:26:40.800015 containerd[1481]: time="2026-04-17T23:26:40.799957157Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0\""
Apr 17 23:26:40.800831 containerd[1481]: time="2026-04-17T23:26:40.800449304Z" level=info msg="StartContainer for \"ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0\""
Apr 17 23:26:40.824438 systemd[1]: Started cri-containerd-ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0.scope - libcontainer container ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0.
Apr 17 23:26:40.843606 containerd[1481]: time="2026-04-17T23:26:40.843568261Z" level=info msg="StartContainer for \"ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0\" returns successfully"
Apr 17 23:26:40.847801 systemd[1]: cri-containerd-ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0.scope: Deactivated successfully.
Apr 17 23:26:40.860799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0-rootfs.mount: Deactivated successfully.
Apr 17 23:26:40.865350 containerd[1481]: time="2026-04-17T23:26:40.865275852Z" level=info msg="shim disconnected" id=ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0 namespace=k8s.io
Apr 17 23:26:40.865350 containerd[1481]: time="2026-04-17T23:26:40.865335793Z" level=warning msg="cleaning up after shim disconnected" id=ea08c1960bed8b0fd311318467321b85b16c990e2fabda4326ce535766e2b9d0 namespace=k8s.io
Apr 17 23:26:40.865350 containerd[1481]: time="2026-04-17T23:26:40.865344281Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:26:41.628100 kubelet[2535]: E0417 23:26:41.628045 2535 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:26:41.786155 kubelet[2535]: E0417 23:26:41.786114 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:41.790805 containerd[1481]: time="2026-04-17T23:26:41.790753520Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:26:41.805369 containerd[1481]: time="2026-04-17T23:26:41.805322404Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89\""
Apr 17 23:26:41.806198 containerd[1481]: time="2026-04-17T23:26:41.805896234Z" level=info msg="StartContainer for \"670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89\""
Apr 17 23:26:41.839542 systemd[1]: Started cri-containerd-670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89.scope - libcontainer container 670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89.
Apr 17 23:26:41.869095 systemd[1]: cri-containerd-670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89.scope: Deactivated successfully.
Apr 17 23:26:41.869837 containerd[1481]: time="2026-04-17T23:26:41.869158459Z" level=info msg="StartContainer for \"670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89\" returns successfully"
Apr 17 23:26:41.883321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89-rootfs.mount: Deactivated successfully.
Apr 17 23:26:41.887885 containerd[1481]: time="2026-04-17T23:26:41.887833030Z" level=info msg="shim disconnected" id=670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89 namespace=k8s.io
Apr 17 23:26:41.887958 containerd[1481]: time="2026-04-17T23:26:41.887885914Z" level=warning msg="cleaning up after shim disconnected" id=670a859f4838938c6e3b9d77a462a10412784819b0d11e45a82b4b9635ce9e89 namespace=k8s.io
Apr 17 23:26:41.887958 containerd[1481]: time="2026-04-17T23:26:41.887897648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:26:42.789783 kubelet[2535]: E0417 23:26:42.789730 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:42.793855 containerd[1481]: time="2026-04-17T23:26:42.793801051Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:26:42.804925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782675520.mount: Deactivated successfully.
Apr 17 23:26:42.805350 containerd[1481]: time="2026-04-17T23:26:42.805317413Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a\""
Apr 17 23:26:42.813662 containerd[1481]: time="2026-04-17T23:26:42.813624158Z" level=info msg="StartContainer for \"05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a\""
Apr 17 23:26:42.847410 systemd[1]: Started cri-containerd-05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a.scope - libcontainer container 05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a.
Apr 17 23:26:42.863676 systemd[1]: cri-containerd-05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a.scope: Deactivated successfully.
Apr 17 23:26:42.866722 containerd[1481]: time="2026-04-17T23:26:42.866667452Z" level=info msg="StartContainer for \"05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a\" returns successfully"
Apr 17 23:26:42.880921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a-rootfs.mount: Deactivated successfully.
Apr 17 23:26:42.887883 containerd[1481]: time="2026-04-17T23:26:42.887822868Z" level=info msg="shim disconnected" id=05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a namespace=k8s.io
Apr 17 23:26:42.887883 containerd[1481]: time="2026-04-17T23:26:42.887878040Z" level=warning msg="cleaning up after shim disconnected" id=05cf5055e4117b27f3c74e297398ab150e5230f3a60ec3fcc0ed4da2af714d8a namespace=k8s.io
Apr 17 23:26:42.887883 containerd[1481]: time="2026-04-17T23:26:42.887888703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:26:43.193597 kubelet[2535]: I0417 23:26:43.193539 2535 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:26:43Z","lastTransitionTime":"2026-04-17T23:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 17 23:26:43.795016 kubelet[2535]: E0417 23:26:43.794950 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:26:43.799599 containerd[1481]: time="2026-04-17T23:26:43.799484729Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:26:43.816727 containerd[1481]: time="2026-04-17T23:26:43.816641950Z" level=info msg="CreateContainer within sandbox \"0dba1c4666c92ec79de624c6f5f6cdcc6d420f086350e207b5a48d6768e47428\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858\""
Apr 17 23:26:43.817454 containerd[1481]: time="2026-04-17T23:26:43.817383473Z" level=info msg="StartContainer for
\"bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858\"" Apr 17 23:26:43.850538 systemd[1]: Started cri-containerd-bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858.scope - libcontainer container bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858. Apr 17 23:26:43.872021 containerd[1481]: time="2026-04-17T23:26:43.871948190Z" level=info msg="StartContainer for \"bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858\" returns successfully" Apr 17 23:26:44.106266 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 17 23:26:44.801267 kubelet[2535]: E0417 23:26:44.799579 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:44.810550 systemd[1]: run-containerd-runc-k8s.io-bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858-runc.KBZdcG.mount: Deactivated successfully. Apr 17 23:26:45.936267 kubelet[2535]: E0417 23:26:45.936181 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:46.018502 systemd[1]: run-containerd-runc-k8s.io-bdaf25f7642f0bcd7684af2d11522a20a3fb7e8f4be83f0f253d1baf9dd18858-runc.IrDLHq.mount: Deactivated successfully. 
Apr 17 23:26:46.904064 systemd-networkd[1394]: lxc_health: Link UP Apr 17 23:26:46.913271 systemd-networkd[1394]: lxc_health: Gained carrier Apr 17 23:26:47.592026 kubelet[2535]: E0417 23:26:47.591640 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:47.936713 kubelet[2535]: E0417 23:26:47.936589 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:47.952621 kubelet[2535]: I0417 23:26:47.952570 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fmc79" podStartSLOduration=8.952555851 podStartE2EDuration="8.952555851s" podCreationTimestamp="2026-04-17 23:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:26:44.82287019 +0000 UTC m=+73.302662962" watchObservedRunningTime="2026-04-17 23:26:47.952555851 +0000 UTC m=+76.432348624" Apr 17 23:26:48.756514 systemd-networkd[1394]: lxc_health: Gained IPv6LL Apr 17 23:26:48.806193 kubelet[2535]: E0417 23:26:48.806143 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:49.808731 kubelet[2535]: E0417 23:26:49.807594 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:52.591766 kubelet[2535]: E0417 23:26:52.591702 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:54.438054 sshd[4349]: 
pam_unix(sshd:session): session closed for user core Apr 17 23:26:54.440646 systemd[1]: sshd@26-10.0.0.7:22-10.0.0.1:53656.service: Deactivated successfully. Apr 17 23:26:54.441893 systemd[1]: session-27.scope: Deactivated successfully. Apr 17 23:26:54.442502 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. Apr 17 23:26:54.443266 systemd-logind[1462]: Removed session 27. Apr 17 23:26:54.591946 kubelet[2535]: E0417 23:26:54.591892 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:26:54.591946 kubelet[2535]: E0417 23:26:54.591931 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"