Apr 21 10:24:15.852595 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:24:15.852612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:24:15.852621 kernel: BIOS-provided physical RAM map:
Apr 21 10:24:15.852627 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:24:15.852632 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 10:24:15.852637 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 10:24:15.852643 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 10:24:15.852648 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 10:24:15.852653 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 21 10:24:15.852658 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 21 10:24:15.852664 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 21 10:24:15.852669 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 21 10:24:15.852675 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 21 10:24:15.852680 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 21 10:24:15.852686 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 21 10:24:15.852692 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 10:24:15.852720 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 21 10:24:15.852724 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 21 10:24:15.852729 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 10:24:15.852733 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:24:15.852738 kernel: NX (Execute Disable) protection: active
Apr 21 10:24:15.852742 kernel: APIC: Static calls initialized
Apr 21 10:24:15.852747 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:24:15.852751 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 21 10:24:15.852756 kernel: SMBIOS 2.8 present.
Apr 21 10:24:15.852760 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 21 10:24:15.852765 kernel: Hypervisor detected: KVM
Apr 21 10:24:15.852771 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:24:15.852775 kernel: kvm-clock: using sched offset of 4607811143 cycles
Apr 21 10:24:15.852780 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:24:15.852785 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:24:15.852790 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:24:15.852795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:24:15.852800 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 21 10:24:15.852804 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:24:15.852809 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:24:15.852815 kernel: Using GB pages for direct mapping
Apr 21 10:24:15.852820 kernel: Secure boot disabled
Apr 21 10:24:15.852824 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:24:15.852829 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 10:24:15.852837 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:24:15.852842 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852847 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852853 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 10:24:15.852858 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852863 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852868 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852873 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:24:15.852878 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:24:15.852883 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 10:24:15.852889 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 10:24:15.852894 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 10:24:15.852898 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 10:24:15.852903 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 10:24:15.852908 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 10:24:15.852913 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 10:24:15.852918 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 10:24:15.852923 kernel: No NUMA configuration found
Apr 21 10:24:15.852928 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 21 10:24:15.852933 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 21 10:24:15.852939 kernel: Zone ranges:
Apr 21 10:24:15.852944 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:24:15.852949 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 21 10:24:15.852953 kernel: Normal empty
Apr 21 10:24:15.852958 kernel: Movable zone start for each node
Apr 21 10:24:15.852963 kernel: Early memory node ranges
Apr 21 10:24:15.852968 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:24:15.852973 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 10:24:15.852978 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 10:24:15.852984 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 21 10:24:15.852989 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 21 10:24:15.852994 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 21 10:24:15.852998 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 21 10:24:15.853003 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:24:15.853031 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:24:15.853036 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 10:24:15.853041 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:24:15.853046 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 21 10:24:15.853051 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:24:15.853058 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 21 10:24:15.853063 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:24:15.853068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:24:15.853073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:24:15.853078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:24:15.853082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:24:15.853087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:24:15.853092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:24:15.853097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:24:15.853104 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:24:15.853109 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:24:15.853114 kernel: TSC deadline timer available
Apr 21 10:24:15.853119 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:24:15.853124 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:24:15.853129 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:24:15.853133 kernel: kvm-guest: setup PV sched yield
Apr 21 10:24:15.853138 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 21 10:24:15.853143 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:24:15.853150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:24:15.853155 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:24:15.853160 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:24:15.853165 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:24:15.853170 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:24:15.853175 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:24:15.853180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:24:15.853185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:24:15.853192 kernel: random: crng init done
Apr 21 10:24:15.853197 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:24:15.853202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:24:15.853207 kernel: Fallback order for Node 0: 0
Apr 21 10:24:15.853212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 21 10:24:15.853217 kernel: Policy zone: DMA32
Apr 21 10:24:15.853221 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:24:15.853227 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 172120K reserved, 0K cma-reserved)
Apr 21 10:24:15.853232 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:24:15.853238 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:24:15.853243 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:24:15.853248 kernel: Dynamic Preempt: voluntary
Apr 21 10:24:15.853253 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:24:15.853263 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:24:15.853271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:24:15.853276 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:24:15.853281 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:24:15.853287 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:24:15.853292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:24:15.853297 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:24:15.853303 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:24:15.853310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:24:15.853315 kernel: Console: colour dummy device 80x25
Apr 21 10:24:15.853320 kernel: printk: console [ttyS0] enabled
Apr 21 10:24:15.853326 kernel: ACPI: Core revision 20230628
Apr 21 10:24:15.853332 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:24:15.853339 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:24:15.853344 kernel: x2apic enabled
Apr 21 10:24:15.853350 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:24:15.853355 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:24:15.853361 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:24:15.853366 kernel: kvm-guest: setup PV IPIs
Apr 21 10:24:15.853372 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:24:15.853377 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:24:15.853383 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:24:15.853390 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:24:15.853395 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:24:15.853401 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:24:15.853406 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:24:15.853411 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:24:15.853417 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:24:15.853422 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:24:15.853428 kernel: RETBleed: Vulnerable
Apr 21 10:24:15.853433 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:24:15.853441 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:24:15.853446 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:24:15.853452 kernel: active return thunk: its_return_thunk
Apr 21 10:24:15.853457 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:24:15.853462 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:24:15.853468 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:24:15.853476 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:24:15.853484 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:24:15.853493 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:24:15.853504 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:24:15.853512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:24:15.853521 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:24:15.853531 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:24:15.853541 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:24:15.853550 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:24:15.853560 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:24:15.853569 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:24:15.853579 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:24:15.853588 kernel: landlock: Up and running.
Apr 21 10:24:15.853594 kernel: SELinux: Initializing.
Apr 21 10:24:15.853599 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:24:15.853604 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:24:15.853610 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:24:15.853615 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:24:15.853621 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:24:15.853626 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:24:15.853633 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:24:15.853638 kernel: signal: max sigframe size: 3632
Apr 21 10:24:15.853644 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:24:15.853649 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:24:15.853655 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:24:15.853660 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:24:15.853665 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:24:15.853671 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:24:15.853676 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:24:15.853681 kernel: smpboot: Max logical packages: 1
Apr 21 10:24:15.853688 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:24:15.853693 kernel: devtmpfs: initialized
Apr 21 10:24:15.853716 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:24:15.853722 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 10:24:15.853727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 10:24:15.853732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 21 10:24:15.853738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 10:24:15.853743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 10:24:15.853750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:24:15.853756 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:24:15.853762 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:24:15.853767 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:24:15.853773 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:24:15.853778 kernel: audit: type=2000 audit(1776767055.333:1): state=initialized audit_enabled=0 res=1
Apr 21 10:24:15.853783 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:24:15.853789 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:24:15.853794 kernel: cpuidle: using governor menu
Apr 21 10:24:15.853801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:24:15.853806 kernel: dca service started, version 1.12.1
Apr 21 10:24:15.853812 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:24:15.853817 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:24:15.853823 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:24:15.853828 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:24:15.853833 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:24:15.853839 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:24:15.853844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:24:15.853851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:24:15.853856 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:24:15.853862 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:24:15.853867 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:24:15.853873 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:24:15.853878 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:24:15.853883 kernel: ACPI: Interpreter enabled
Apr 21 10:24:15.853889 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:24:15.853894 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:24:15.853901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:24:15.853907 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:24:15.853912 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:24:15.853918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:24:15.854052 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:24:15.854120 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:24:15.854175 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:24:15.854182 kernel: PCI host bridge to bus 0000:00
Apr 21 10:24:15.854243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:24:15.854294 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:24:15.854344 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:24:15.854392 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:24:15.854441 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:24:15.854491 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 21 10:24:15.854542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:24:15.854609 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:24:15.854669 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:24:15.854748 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 21 10:24:15.854804 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 21 10:24:15.854860 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:24:15.854914 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:24:15.854972 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:24:15.855062 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:24:15.855119 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 21 10:24:15.855174 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 21 10:24:15.855230 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 21 10:24:15.855290 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:24:15.855351 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 21 10:24:15.855410 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 21 10:24:15.855465 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 21 10:24:15.855526 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:24:15.855581 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 21 10:24:15.855637 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 21 10:24:15.855691 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 21 10:24:15.855769 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 21 10:24:15.855834 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:24:15.855891 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:24:15.855951 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:24:15.856031 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 21 10:24:15.856090 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 21 10:24:15.856150 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:24:15.856207 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 21 10:24:15.856214 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:24:15.856220 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:24:15.856225 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:24:15.856231 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:24:15.856236 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:24:15.856241 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:24:15.856247 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:24:15.856252 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:24:15.856259 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:24:15.856264 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:24:15.856270 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:24:15.856275 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:24:15.856281 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:24:15.856286 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:24:15.856291 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:24:15.856297 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:24:15.856303 kernel: iommu: Default domain type: Translated
Apr 21 10:24:15.856309 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:24:15.856314 kernel: efivars: Registered efivars operations
Apr 21 10:24:15.856319 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:24:15.856325 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:24:15.856330 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 10:24:15.856336 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 21 10:24:15.856341 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 21 10:24:15.856347 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 21 10:24:15.856403 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:24:15.856458 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:24:15.856514 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:24:15.856521 kernel: vgaarb: loaded
Apr 21 10:24:15.856527 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:24:15.856532 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:24:15.856537 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:24:15.856543 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:24:15.856548 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:24:15.856556 kernel: pnp: PnP ACPI init
Apr 21 10:24:15.856615 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:24:15.856623 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:24:15.856629 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:24:15.856634 kernel: NET: Registered PF_INET protocol family
Apr 21 10:24:15.856640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:24:15.856645 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:24:15.856651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:24:15.856656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:24:15.856664 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:24:15.856669 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:24:15.856675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:24:15.856680 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:24:15.856686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:24:15.856691 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:24:15.856768 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 21 10:24:15.856825 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 21 10:24:15.856879 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:24:15.856929 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:24:15.856977 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:24:15.857093 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:24:15.857161 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:24:15.857210 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 21 10:24:15.857217 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:24:15.857223 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:24:15.857232 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:24:15.857273 kernel: Initialise system trusted keyrings
Apr 21 10:24:15.857280 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:24:15.857286 kernel: Key type asymmetric registered
Apr 21 10:24:15.857291 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:24:15.857296 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:24:15.857302 kernel: io scheduler mq-deadline registered
Apr 21 10:24:15.857307 kernel: io scheduler kyber registered
Apr 21 10:24:15.857313 kernel: io scheduler bfq registered
Apr 21 10:24:15.857320 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:24:15.857326 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:24:15.857332 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:24:15.857337 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:24:15.857343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:24:15.857348 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:24:15.857354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:24:15.857359 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:24:15.857365 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:24:15.857430 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:24:15.857438 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:24:15.857488 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:24:15.857540 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:24:15 UTC (1776767055)
Apr 21 10:24:15.857591 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 10:24:15.857598 kernel: intel_pstate: CPU model not supported
Apr 21 10:24:15.857603 kernel: efifb: probing for efifb
Apr 21 10:24:15.857609 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 21 10:24:15.857616 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 21 10:24:15.857622 kernel: efifb: scrolling: redraw
Apr 21 10:24:15.857627 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 21 10:24:15.857633 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:24:15.857638 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:24:15.857655 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:24:15.857663 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:24:15.857669 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:24:15.857674 kernel: Segment Routing with IPv6
Apr 21 10:24:15.857681 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:24:15.857687 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:24:15.857693 kernel: Key type dns_resolver registered
Apr 21 10:24:15.857718 kernel: IPI shorthand broadcast: enabled
Apr 21 10:24:15.857723 kernel: sched_clock: Marking stable (647008228, 196354345)->(946249286, -102886713)
Apr 21 10:24:15.857729 kernel: registered taskstats version 1
Apr 21 10:24:15.857734 kernel: Loading compiled-in X.509 certificates
Apr 21 10:24:15.857740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:24:15.857746 kernel: Key type .fscrypt registered
Apr 21 10:24:15.857753 kernel: Key type fscrypt-provisioning registered
Apr 21 10:24:15.857758 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:24:15.857764 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:24:15.857769 kernel: ima: No architecture policies found Apr 21 10:24:15.857775 kernel: clk: Disabling unused clocks Apr 21 10:24:15.857781 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:24:15.857787 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:24:15.857792 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:24:15.857798 kernel: Run /init as init process Apr 21 10:24:15.857805 kernel: with arguments: Apr 21 10:24:15.857810 kernel: /init Apr 21 10:24:15.857816 kernel: with environment: Apr 21 10:24:15.857821 kernel: HOME=/ Apr 21 10:24:15.857827 kernel: TERM=linux Apr 21 10:24:15.857834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:24:15.857842 systemd[1]: Detected virtualization kvm. Apr 21 10:24:15.857849 systemd[1]: Detected architecture x86-64. Apr 21 10:24:15.857855 systemd[1]: Running in initrd. Apr 21 10:24:15.857861 systemd[1]: No hostname configured, using default hostname. Apr 21 10:24:15.857867 systemd[1]: Hostname set to . Apr 21 10:24:15.857873 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:24:15.857881 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:24:15.857887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:24:15.857893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:24:15.857900 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:24:15.857906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:24:15.857912 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:24:15.857918 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:24:15.857925 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:24:15.857933 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:24:15.857939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:24:15.857945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:24:15.857950 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:24:15.857956 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:24:15.857962 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:24:15.857968 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:24:15.857974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:24:15.857981 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:24:15.857989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:24:15.857994 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:24:15.858000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:24:15.858027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:24:15.858034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:24:15.858039 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:24:15.858045 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:24:15.858053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:24:15.858059 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:24:15.858065 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:24:15.858071 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:24:15.858077 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:24:15.858083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:15.858089 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:24:15.858106 systemd-journald[194]: Collecting audit messages is disabled. Apr 21 10:24:15.858123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:24:15.858129 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:24:15.858137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:24:15.858145 systemd-journald[194]: Journal started Apr 21 10:24:15.858159 systemd-journald[194]: Runtime Journal (/run/log/journal/3668c3ac861942fc8b302c207b5c47dd) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:24:15.860047 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:24:15.861077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:24:15.861278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:24:15.862154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:24:15.876678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 21 10:24:15.877693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:24:15.884062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:15.892127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:24:15.895401 systemd-modules-load[195]: Inserted module 'overlay' Apr 21 10:24:15.905756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:24:15.913116 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:24:15.920080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:24:15.921852 dracut-cmdline[224]: dracut-dracut-053 Apr 21 10:24:15.923423 kernel: Bridge firewalling registered Apr 21 10:24:15.922769 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 21 10:24:15.923452 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:24:15.927742 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:24:15.927835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:24:15.941462 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:24:15.943258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:24:15.964533 systemd-resolved[259]: Positive Trust Anchors: Apr 21 10:24:15.964560 systemd-resolved[259]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:24:15.964584 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:24:15.966459 systemd-resolved[259]: Defaulting to hostname 'linux'. Apr 21 10:24:15.967150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:24:15.968440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:24:16.014055 kernel: SCSI subsystem initialized Apr 21 10:24:16.022054 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:24:16.031051 kernel: iscsi: registered transport (tcp) Apr 21 10:24:16.048596 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:24:16.048632 kernel: QLogic iSCSI HBA Driver Apr 21 10:24:16.080243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:24:16.093198 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:24:16.116787 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:24:16.116824 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:24:16.118244 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:24:16.155064 kernel: raid6: avx512x4 gen() 43661 MB/s Apr 21 10:24:16.172055 kernel: raid6: avx512x2 gen() 43530 MB/s Apr 21 10:24:16.189057 kernel: raid6: avx512x1 gen() 43357 MB/s Apr 21 10:24:16.206054 kernel: raid6: avx2x4 gen() 37816 MB/s Apr 21 10:24:16.223056 kernel: raid6: avx2x2 gen() 37469 MB/s Apr 21 10:24:16.240764 kernel: raid6: avx2x1 gen() 28278 MB/s Apr 21 10:24:16.240780 kernel: raid6: using algorithm avx512x4 gen() 43661 MB/s Apr 21 10:24:16.258757 kernel: raid6: .... xor() 9742 MB/s, rmw enabled Apr 21 10:24:16.258805 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:24:16.277168 kernel: xor: automatically using best checksumming function avx Apr 21 10:24:16.396052 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:24:16.405309 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:24:16.414199 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:24:16.423424 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 21 10:24:16.425968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:24:16.431147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:24:16.443829 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Apr 21 10:24:16.466161 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:24:16.480188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:24:16.509230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:24:16.515167 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:24:16.523505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:24:16.525449 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:24:16.528509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:24:16.530250 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:24:16.544040 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:24:16.540226 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:24:16.550218 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:24:16.552219 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:24:16.550452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:24:16.558654 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:24:16.558681 kernel: GPT:9289727 != 19775487 Apr 21 10:24:16.558689 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:24:16.558542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:24:16.562934 kernel: GPT:9289727 != 19775487 Apr 21 10:24:16.562947 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:24:16.562954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:24:16.558612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:24:16.563653 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:24:16.563871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:24:16.564385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:16.577377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:16.588045 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 21 10:24:16.588069 kernel: libata version 3.00 loaded. Apr 21 10:24:16.588077 kernel: AES CTR mode by8 optimization enabled Apr 21 10:24:16.593305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:16.598062 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Apr 21 10:24:16.601037 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (460) Apr 21 10:24:16.606372 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:24:16.606500 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:24:16.608267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:16.615860 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:24:16.615970 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:24:16.616108 kernel: scsi host0: ahci Apr 21 10:24:16.616185 kernel: scsi host1: ahci Apr 21 10:24:16.616251 kernel: scsi host2: ahci Apr 21 10:24:16.616314 kernel: scsi host3: ahci Apr 21 10:24:16.621618 kernel: scsi host4: ahci Apr 21 10:24:16.621720 kernel: scsi host5: ahci Apr 21 10:24:16.621793 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 21 10:24:16.621801 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 21 10:24:16.621808 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 21 10:24:16.616331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 21 10:24:16.628834 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 21 10:24:16.628850 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 21 10:24:16.628857 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 21 10:24:16.630776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 10:24:16.634264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:24:16.640408 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:24:16.641349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:24:16.658150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:24:16.658878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:24:16.658922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:16.662352 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:16.665557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:16.675459 disk-uuid[558]: Primary Header is updated. Apr 21 10:24:16.675459 disk-uuid[558]: Secondary Entries is updated. Apr 21 10:24:16.675459 disk-uuid[558]: Secondary Header is updated. Apr 21 10:24:16.679433 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:24:16.683034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:24:16.683562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:16.697191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 21 10:24:16.712524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:24:16.934036 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:24:16.934111 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:24:16.934120 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:24:16.937038 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:24:16.937083 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:24:16.938053 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:24:16.939877 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:24:16.939887 kernel: ata3.00: applying bridge limits Apr 21 10:24:16.941526 kernel: ata3.00: configured for UDMA/100 Apr 21 10:24:16.942035 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:24:16.987972 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:24:16.988189 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:24:17.003083 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:24:17.687058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:24:17.687143 disk-uuid[560]: The operation has completed successfully. Apr 21 10:24:17.707768 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:24:17.707858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:24:17.726162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:24:17.730278 sh[598]: Success Apr 21 10:24:17.740027 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:24:17.765361 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:24:17.774167 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:24:17.776738 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:24:17.786512 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:24:17.786537 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:24:17.786545 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:24:17.788040 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:24:17.789200 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:24:17.794226 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:24:17.796945 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:24:17.809161 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:24:17.811339 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:24:17.820332 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:24:17.820359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:24:17.820370 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:24:17.824048 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:24:17.830183 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:24:17.834051 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:24:17.839508 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:24:17.847150 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:24:17.887318 ignition[693]: Ignition 2.19.0 Apr 21 10:24:17.887333 ignition[693]: Stage: fetch-offline Apr 21 10:24:17.887355 ignition[693]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:24:17.887360 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:24:17.887416 ignition[693]: parsed url from cmdline: "" Apr 21 10:24:17.887418 ignition[693]: no config URL provided Apr 21 10:24:17.887421 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:24:17.887426 ignition[693]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:24:17.887441 ignition[693]: op(1): [started] loading QEMU firmware config module Apr 21 10:24:17.887445 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:24:17.899876 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:24:17.905650 ignition[693]: op(1): [finished] loading QEMU firmware config module Apr 21 10:24:17.908170 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:24:17.922892 systemd-networkd[786]: lo: Link UP Apr 21 10:24:17.922914 systemd-networkd[786]: lo: Gained carrier Apr 21 10:24:17.923765 systemd-networkd[786]: Enumeration completed Apr 21 10:24:17.923833 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:24:17.924308 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:24:17.924310 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:24:17.925279 systemd-networkd[786]: eth0: Link UP Apr 21 10:24:17.925281 systemd-networkd[786]: eth0: Gained carrier Apr 21 10:24:17.925286 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:24:17.928347 systemd[1]: Reached target network.target - Network. Apr 21 10:24:17.943093 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:24:18.042943 ignition[693]: parsing config with SHA512: 96ce448223044d7e5e92dacb83b697cd290d5a650cfbfca09a3f888b7c4f43eb60c7b3e8c936ee1a29e4cd130259cb16dcb36d99e006af8e2d85c8bc18a6300a Apr 21 10:24:18.046179 unknown[693]: fetched base config from "system" Apr 21 10:24:18.046188 unknown[693]: fetched user config from "qemu" Apr 21 10:24:18.046823 ignition[693]: fetch-offline: fetch-offline passed Apr 21 10:24:18.048775 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:24:18.046886 ignition[693]: Ignition finished successfully Apr 21 10:24:18.051423 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:24:18.059253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:24:18.069650 ignition[790]: Ignition 2.19.0 Apr 21 10:24:18.069665 ignition[790]: Stage: kargs Apr 21 10:24:18.069808 ignition[790]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:24:18.069816 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:24:18.070422 ignition[790]: kargs: kargs passed Apr 21 10:24:18.070450 ignition[790]: Ignition finished successfully Apr 21 10:24:18.074600 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:24:18.084162 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:24:18.098283 ignition[798]: Ignition 2.19.0 Apr 21 10:24:18.098297 ignition[798]: Stage: disks Apr 21 10:24:18.098454 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:24:18.098462 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:24:18.099875 ignition[798]: disks: disks passed Apr 21 10:24:18.099904 ignition[798]: Ignition finished successfully Apr 21 10:24:18.105742 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:24:18.106484 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:24:18.109328 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:24:18.112444 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:24:18.116321 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:24:18.119127 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:24:18.128170 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:24:18.138895 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:24:18.143311 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:24:18.146611 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:24:18.225322 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:24:18.225134 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:24:18.226228 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:24:18.241151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:24:18.244703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 21 10:24:18.248049 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Apr 21 10:24:18.248838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:24:18.258352 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:24:18.258370 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:24:18.258379 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:24:18.258386 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:24:18.248871 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:24:18.248889 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:24:18.257056 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:24:18.265548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:24:18.269448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:24:18.292786 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:24:18.295866 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:24:18.300150 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:24:18.303883 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:24:18.368606 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:24:18.388131 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:24:18.391946 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 21 10:24:18.399051 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:24:18.411547 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:24:18.416917 ignition[933]: INFO : Ignition 2.19.0 Apr 21 10:24:18.416917 ignition[933]: INFO : Stage: mount Apr 21 10:24:18.420771 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:24:18.420771 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:24:18.420771 ignition[933]: INFO : mount: mount passed Apr 21 10:24:18.420771 ignition[933]: INFO : Ignition finished successfully Apr 21 10:24:18.418690 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:24:18.428179 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:24:18.784965 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:24:18.797221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:24:18.804811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Apr 21 10:24:18.804840 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:24:18.804850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:24:18.807105 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:24:18.810050 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:24:18.811123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:24:18.831528 ignition[962]: INFO : Ignition 2.19.0
Apr 21 10:24:18.831528 ignition[962]: INFO : Stage: files
Apr 21 10:24:18.833956 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:18.833956 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:24:18.833956 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:24:18.833956 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:24:18.833956 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:24:18.843467 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:24:18.843467 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:24:18.843467 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:24:18.843467 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:24:18.843467 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:24:18.835087 unknown[962]: wrote ssh authorized keys file for user: core
Apr 21 10:24:18.896686 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:24:18.973571 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:24:18.973571 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:24:18.981377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 21 10:24:19.211217 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:24:19.282313 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:24:19.282313 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:19.287978 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:24:19.546332 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:24:19.749259 systemd-networkd[786]: eth0: Gained IPv6LL
Apr 21 10:24:19.901566 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:24:19.901566 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 21 10:24:19.907128 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:24:19.910223 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:24:19.910223 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 21 10:24:19.910223 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 21 10:24:19.910223 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:24:19.919656 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:24:19.919656 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 21 10:24:19.919656 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:24:19.937580 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:24:19.941456 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:24:19.943811 ignition[962]: INFO : files: files passed
Apr 21 10:24:19.943811 ignition[962]: INFO : Ignition finished successfully
Apr 21 10:24:19.958105 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:24:19.973228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:24:19.977462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:24:19.981424 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:24:19.982919 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:24:19.986342 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:24:19.988529 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:19.988529 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:19.993180 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:24:19.993504 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:24:19.994880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:24:20.015234 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:24:20.038404 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:24:20.038539 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:24:20.042503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:24:20.044395 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:24:20.050738 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:24:20.063275 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:24:20.076850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:24:20.079383 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:24:20.092833 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:24:20.093904 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:24:20.098295 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:24:20.101337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:24:20.101522 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:24:20.105979 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:24:20.108974 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:24:20.109867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:24:20.113614 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:24:20.119459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:24:20.120562 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:24:20.123590 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:24:20.129737 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:24:20.130593 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:24:20.133632 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:24:20.138206 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:24:20.138375 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:24:20.142824 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:24:20.143583 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:24:20.149491 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:24:20.151127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:24:20.155139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:24:20.155316 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:24:20.159658 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:24:20.159799 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:24:20.160706 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:24:20.165832 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:24:20.171146 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:24:20.175781 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:24:20.176497 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:24:20.179057 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:24:20.179138 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:24:20.181655 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:24:20.181747 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:24:20.186531 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:24:20.186671 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:24:20.189542 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:24:20.189628 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:24:20.205243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:24:20.208710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:24:20.211789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:24:20.211972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:24:20.214745 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:24:20.214877 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:24:20.222439 ignition[1016]: INFO : Ignition 2.19.0
Apr 21 10:24:20.222439 ignition[1016]: INFO : Stage: umount
Apr 21 10:24:20.222439 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:24:20.222439 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:24:20.222439 ignition[1016]: INFO : umount: umount passed
Apr 21 10:24:20.222439 ignition[1016]: INFO : Ignition finished successfully
Apr 21 10:24:20.222767 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:24:20.222852 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:24:20.225948 systemd[1]: Stopped target network.target - Network.
Apr 21 10:24:20.228112 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:24:20.228195 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:24:20.231241 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:24:20.231307 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:24:20.234557 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:24:20.234593 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:24:20.238262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:24:20.238306 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:24:20.241705 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:24:20.244886 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:24:20.248825 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:24:20.249524 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:24:20.249621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:24:20.259829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:24:20.259959 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:24:20.261461 systemd-networkd[786]: eth0: DHCPv6 lease lost
Apr 21 10:24:20.278230 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:24:20.278338 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:24:20.284641 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:24:20.284684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:24:20.290221 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:24:20.291814 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:24:20.291861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:24:20.298247 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:24:20.298286 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:24:20.301856 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:24:20.301890 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:24:20.305168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:24:20.305198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:24:20.310089 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:24:20.311702 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:24:20.311805 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:24:20.319952 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:24:20.319992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:24:20.332164 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:24:20.332247 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:24:20.360555 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:24:20.362205 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:24:20.366003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:24:20.366096 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:24:20.369826 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:24:20.369858 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:24:20.370836 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:24:20.370871 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:24:20.378756 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:24:20.378791 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:24:20.382751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:24:20.382814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:24:20.398294 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:24:20.403120 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:24:20.403194 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:24:20.408065 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:24:20.408113 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:24:20.408883 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:24:20.408929 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:24:20.414814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:24:20.414864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:24:20.419269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:24:20.419358 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:24:20.422742 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:24:20.426492 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:24:20.437820 systemd[1]: Switching root.
Apr 21 10:24:20.459653 systemd-journald[194]: Journal stopped
Apr 21 10:24:21.264423 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:24:21.264471 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:24:21.264486 kernel: SELinux: policy capability open_perms=1
Apr 21 10:24:21.264494 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:24:21.264502 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:24:21.264512 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:24:21.264520 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:24:21.264530 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:24:21.264540 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:24:21.264547 kernel: audit: type=1403 audit(1776767060.601:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:24:21.264557 systemd[1]: Successfully loaded SELinux policy in 40.466ms.
Apr 21 10:24:21.264567 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.336ms.
Apr 21 10:24:21.264578 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:24:21.264586 systemd[1]: Detected virtualization kvm.
Apr 21 10:24:21.264595 systemd[1]: Detected architecture x86-64.
Apr 21 10:24:21.264602 systemd[1]: Detected first boot.
Apr 21 10:24:21.264612 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:24:21.264620 zram_generator::config[1059]: No configuration found.
Apr 21 10:24:21.264630 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:24:21.264638 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:24:21.264645 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:24:21.264653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:24:21.264661 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:24:21.264669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:24:21.264679 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:24:21.264687 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:24:21.264694 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:24:21.264705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:24:21.264712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:24:21.264747 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:24:21.264756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:24:21.264764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:24:21.264771 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:24:21.264781 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:24:21.264789 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:24:21.264797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:24:21.264805 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:24:21.264812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:24:21.264820 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:24:21.264829 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:24:21.264837 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:24:21.264846 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:24:21.264853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:24:21.264862 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:24:21.264869 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:24:21.264877 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:24:21.264885 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:24:21.264893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:24:21.264901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:24:21.264908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:24:21.264917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:24:21.264925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:24:21.264932 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:24:21.264940 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:24:21.264948 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:24:21.264956 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:24:21.264963 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:24:21.264971 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:24:21.264981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:24:21.264990 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:24:21.264998 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:24:21.265051 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:24:21.265060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:24:21.265068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:24:21.265077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:24:21.265085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:24:21.265092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:24:21.265127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:24:21.265135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:24:21.265143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:24:21.265151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:24:21.265159 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:24:21.265166 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:24:21.265174 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:24:21.265182 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:24:21.265192 kernel: fuse: init (API version 7.39)
Apr 21 10:24:21.265200 kernel: loop: module loaded
Apr 21 10:24:21.265207 kernel: ACPI: bus type drm_connector registered
Apr 21 10:24:21.265214 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:24:21.265221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:24:21.265229 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:24:21.265238 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:24:21.265258 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 21 10:24:21.265275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:24:21.265285 systemd-journald[1140]: Journal started
Apr 21 10:24:21.265301 systemd-journald[1140]: Runtime Journal (/run/log/journal/3668c3ac861942fc8b302c207b5c47dd) is 6.0M, max 48.3M, 42.2M free.
Apr 21 10:24:20.977273 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:24:20.990117 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:24:20.990535 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:24:21.267362 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:24:21.268629 systemd[1]: Stopped verity-setup.service.
Apr 21 10:24:21.274053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:24:21.276171 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:24:21.277865 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:24:21.279618 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:24:21.281508 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:24:21.283081 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:24:21.284797 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:24:21.286549 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:24:21.288342 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:24:21.290355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:24:21.292386 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:24:21.292523 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:24:21.294699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:24:21.294874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:24:21.296896 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:24:21.297052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:24:21.298854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:24:21.298981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:24:21.301040 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:24:21.301153 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:24:21.302916 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:24:21.303071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:24:21.304968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:24:21.306905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:24:21.309058 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:24:21.313937 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:24:21.320974 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:24:21.331218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:24:21.334438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:24:21.336175 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:24:21.336208 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:24:21.338587 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:24:21.340410 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:24:21.343064 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:24:21.343817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:24:21.344701 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:24:21.347442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:24:21.348359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:24:21.349082 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:24:21.350943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:24:21.351661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:24:21.356851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:24:21.361070 systemd-journald[1140]: Time spent on flushing to /var/log/journal/3668c3ac861942fc8b302c207b5c47dd is 24.069ms for 1002 entries.
Apr 21 10:24:21.361070 systemd-journald[1140]: System Journal (/var/log/journal/3668c3ac861942fc8b302c207b5c47dd) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:24:21.408487 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 21 10:24:21.408526 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:24:21.408545 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:24:21.360193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:24:21.365487 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:24:21.367488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:24:21.370484 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:24:21.372817 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:24:21.375756 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:24:21.381248 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:24:21.393160 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:24:21.398576 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:24:21.399524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:24:21.404526 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Apr 21 10:24:21.404534 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Apr 21 10:24:21.408901 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:24:21.412165 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:24:21.422649 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:24:21.424859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:24:21.425351 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:24:21.439038 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:24:21.447689 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:24:21.456197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:24:21.469477 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Apr 21 10:24:21.469750 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Apr 21 10:24:21.473196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:24:21.480484 kernel: loop2: detected capacity change from 0 to 217752 Apr 21 10:24:21.519037 kernel: loop3: detected capacity change from 0 to 140768 Apr 21 10:24:21.530067 kernel: loop4: detected capacity change from 0 to 142488 Apr 21 10:24:21.540050 kernel: loop5: detected capacity change from 0 to 217752 Apr 21 10:24:21.546440 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 10:24:21.546793 (sd-merge)[1203]: Merged extensions into '/usr'. Apr 21 10:24:21.551287 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:24:21.551298 systemd[1]: Reloading... Apr 21 10:24:21.595088 zram_generator::config[1228]: No configuration found. Apr 21 10:24:21.638412 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:24:21.665129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:24:21.693837 systemd[1]: Reloading finished in 142 ms. Apr 21 10:24:21.727316 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:24:21.730183 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:24:21.746262 systemd[1]: Starting ensure-sysext.service... Apr 21 10:24:21.748918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:24:21.751126 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:24:21.754578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:24:21.759185 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:24:21.759221 systemd[1]: Reloading... 
Apr 21 10:24:21.764508 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:24:21.764854 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:24:21.765533 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:24:21.765708 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 21 10:24:21.765774 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 21 10:24:21.767962 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:24:21.767985 systemd-tmpfiles[1268]: Skipping /boot Apr 21 10:24:21.773156 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:24:21.773405 systemd-tmpfiles[1268]: Skipping /boot Apr 21 10:24:21.775632 systemd-udevd[1270]: Using default interface naming scheme 'v255'. Apr 21 10:24:21.793051 zram_generator::config[1297]: No configuration found. 
Apr 21 10:24:21.836058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1313) Apr 21 10:24:21.864134 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 21 10:24:21.888060 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 21 10:24:21.890045 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:24:21.900455 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 10:24:21.900650 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:24:21.900803 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:24:21.900905 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:24:21.896080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:24:21.921497 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:24:21.943428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:24:21.945613 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:24:21.945859 systemd[1]: Reloading finished in 186 ms. Apr 21 10:24:22.024545 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:24:22.040479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:24:22.057303 systemd[1]: Finished ensure-sysext.service. Apr 21 10:24:22.058878 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:24:22.072498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:24:22.084187 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 21 10:24:22.087408 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:24:22.089338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:24:22.091180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:24:22.093926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:24:22.097559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:24:22.100115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:24:22.105520 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:24:22.102925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:24:22.104657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:24:22.106353 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:24:22.110246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:24:22.113571 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:24:22.118181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:24:22.121184 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:24:22.123660 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:24:22.126489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:24:22.128289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 21 10:24:22.128948 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:24:22.131327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:24:22.132855 augenrules[1398]: No rules Apr 21 10:24:22.131421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:24:22.133645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:24:22.135921 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:24:22.136204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:24:22.138500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:24:22.138716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:24:22.141318 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:24:22.141482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:24:22.143925 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:24:22.146636 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:24:22.154841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:24:22.167310 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:24:22.169159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:24:22.169286 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:24:22.170343 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:24:22.173115 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Apr 21 10:24:22.173218 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:24:22.175536 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:24:22.178520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:24:22.181375 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:24:22.184523 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:24:22.187877 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:24:22.199405 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:24:22.202264 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 10:24:22.244344 systemd-networkd[1389]: lo: Link UP Apr 21 10:24:22.244367 systemd-networkd[1389]: lo: Gained carrier Apr 21 10:24:22.245214 systemd-networkd[1389]: Enumeration completed Apr 21 10:24:22.245330 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:24:22.245650 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:24:22.245653 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:24:22.246306 systemd-networkd[1389]: eth0: Link UP Apr 21 10:24:22.246309 systemd-networkd[1389]: eth0: Gained carrier Apr 21 10:24:22.246318 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:24:22.247411 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Apr 21 10:24:22.249407 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:24:22.252620 systemd-resolved[1393]: Positive Trust Anchors: Apr 21 10:24:22.252650 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:24:22.252676 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:24:22.255553 systemd-resolved[1393]: Defaulting to hostname 'linux'. Apr 21 10:24:22.271179 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:24:22.273281 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:24:22.274071 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:24:22.274881 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Apr 21 10:24:22.275123 systemd[1]: Reached target network.target - Network. Apr 21 10:24:22.904870 systemd-resolved[1393]: Clock change detected. Flushing caches. Apr 21 10:24:22.904884 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 10:24:22.904908 systemd-timesyncd[1394]: Initial clock synchronization to Tue 2026-04-21 10:24:22.904821 UTC. Apr 21 10:24:22.905967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:24:22.907955 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 21 10:24:22.909864 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:24:22.911727 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:24:22.913714 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:24:22.915396 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:24:22.917283 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:24:22.919125 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:24:22.919160 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:24:22.920493 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:24:22.922406 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:24:22.925077 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:24:22.941867 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:24:22.944346 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:24:22.946127 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:24:22.947584 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:24:22.949016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:24:22.949048 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:24:22.949860 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:24:22.952254 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:24:22.954404 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 21 10:24:22.957399 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:24:22.960349 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:24:22.961139 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:24:22.963349 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 10:24:22.967206 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 10:24:22.970424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:24:22.972113 jq[1438]: false Apr 21 10:24:22.976393 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 10:24:22.976483 dbus-daemon[1437]: [system] SELinux support is enabled Apr 21 10:24:22.978237 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 10:24:22.978590 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 10:24:22.983442 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 21 10:24:22.986596 extend-filesystems[1439]: Found loop3 Apr 21 10:24:22.986596 extend-filesystems[1439]: Found loop4 Apr 21 10:24:22.986596 extend-filesystems[1439]: Found loop5 Apr 21 10:24:22.986596 extend-filesystems[1439]: Found sr0 Apr 21 10:24:22.986596 extend-filesystems[1439]: Found vda Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda1 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda2 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda3 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found usr Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda4 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda6 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda7 Apr 21 10:24:22.997047 extend-filesystems[1439]: Found vda9 Apr 21 10:24:22.997047 extend-filesystems[1439]: Checking size of /dev/vda9 Apr 21 10:24:23.019109 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 10:24:22.990184 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:24:23.019184 update_engine[1451]: I20260421 10:24:22.995080 1451 main.cc:92] Flatcar Update Engine starting Apr 21 10:24:23.019184 update_engine[1451]: I20260421 10:24:22.995962 1451 update_check_scheduler.cc:74] Next update check in 2m47s Apr 21 10:24:23.019860 extend-filesystems[1439]: Resized partition /dev/vda9 Apr 21 10:24:22.996847 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 10:24:23.022739 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:24:23.007628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:24:23.007751 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:24:23.007934 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 10:24:23.008076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 21 10:24:23.013657 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 10:24:23.014374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 10:24:23.027296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1316) Apr 21 10:24:23.028202 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 10:24:23.029348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 10:24:23.032708 jq[1456]: true Apr 21 10:24:23.033932 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 10:24:23.034045 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 10:24:23.039826 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:24:23.054010 tar[1462]: linux-amd64/LICENSE Apr 21 10:24:23.054209 jq[1470]: true Apr 21 10:24:23.055215 tar[1462]: linux-amd64/helm Apr 21 10:24:23.074107 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 10:24:23.072674 systemd[1]: Started update-engine.service - Update Engine. Apr 21 10:24:23.079435 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 21 10:24:23.095605 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Apr 21 10:24:23.095623 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:24:23.096619 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 10:24:23.096619 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 10:24:23.096619 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 10:24:23.116293 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Apr 21 10:24:23.124202 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:24:23.103105 systemd-logind[1447]: New seat seat0. Apr 21 10:24:23.104802 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:24:23.104936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:24:23.113746 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:24:23.118704 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 10:24:23.119134 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 10:24:23.130453 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:24:23.217909 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:24:23.219383 containerd[1463]: time="2026-04-21T10:24:23.219300132Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:24:23.236093 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:24:23.238587 containerd[1463]: time="2026-04-21T10:24:23.238544519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240553237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240587708Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240605251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240719372Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240733124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240777263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240822 containerd[1463]: time="2026-04-21T10:24:23.240787549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240992 containerd[1463]: time="2026-04-21T10:24:23.240909301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240992 containerd[1463]: time="2026-04-21T10:24:23.240920845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240992 containerd[1463]: time="2026-04-21T10:24:23.240933694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:24:23.240992 containerd[1463]: time="2026-04-21T10:24:23.240943324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.241064 containerd[1463]: time="2026-04-21T10:24:23.241028740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.241237 containerd[1463]: time="2026-04-21T10:24:23.241202138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:24:23.241530 containerd[1463]: time="2026-04-21T10:24:23.241442751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:24:23.241530 containerd[1463]: time="2026-04-21T10:24:23.241459506Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 10:24:23.241530 containerd[1463]: time="2026-04-21T10:24:23.241518656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 21 10:24:23.241575 containerd[1463]: time="2026-04-21T10:24:23.241545289Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:24:23.244508 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:24:23.250647 containerd[1463]: time="2026-04-21T10:24:23.250615257Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:24:23.250694 containerd[1463]: time="2026-04-21T10:24:23.250672156Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 10:24:23.250694 containerd[1463]: time="2026-04-21T10:24:23.250686153Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:24:23.250723 containerd[1463]: time="2026-04-21T10:24:23.250697969Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:24:23.250723 containerd[1463]: time="2026-04-21T10:24:23.250708332Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:24:23.250845 containerd[1463]: time="2026-04-21T10:24:23.250808206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:24:23.251095 containerd[1463]: time="2026-04-21T10:24:23.251077593Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:24:23.251182 containerd[1463]: time="2026-04-21T10:24:23.251163024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:24:23.251201 containerd[1463]: time="2026-04-21T10:24:23.251186906Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 21 10:24:23.251201 containerd[1463]: time="2026-04-21T10:24:23.251197353Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:24:23.251191 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251209533Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251219764Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251228569Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251238082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251247791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251290 containerd[1463]: time="2026-04-21T10:24:23.251287963Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251303590Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251313844Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251331048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251340577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251349300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251359045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251369 containerd[1463]: time="2026-04-21T10:24:23.251368056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251377546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251385877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251394389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251402754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251414565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251460 containerd[1463]: time="2026-04-21T10:24:23.251422622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.251379 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253007015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253025330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253037162Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253058322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253067692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253075751Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253112860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253126217Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253134355Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253144308Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253151776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253160158Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253167355Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:24:23.253244 containerd[1463]: time="2026-04-21T10:24:23.253174967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 21 10:24:23.253593 containerd[1463]: time="2026-04-21T10:24:23.253401022Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:24:23.253593 containerd[1463]: time="2026-04-21T10:24:23.253459961Z" level=info msg="Connect containerd service" Apr 21 10:24:23.253593 containerd[1463]: time="2026-04-21T10:24:23.253484697Z" level=info msg="using legacy CRI server" Apr 21 10:24:23.253593 containerd[1463]: time="2026-04-21T10:24:23.253489348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:24:23.253593 containerd[1463]: time="2026-04-21T10:24:23.253554233Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:24:23.254014 containerd[1463]: 
time="2026-04-21T10:24:23.253932138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:24:23.254324 containerd[1463]: time="2026-04-21T10:24:23.254173592Z" level=info msg="Start subscribing containerd event" Apr 21 10:24:23.254324 containerd[1463]: time="2026-04-21T10:24:23.254210786Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:24:23.254324 containerd[1463]: time="2026-04-21T10:24:23.254221955Z" level=info msg="Start recovering state" Apr 21 10:24:23.254324 containerd[1463]: time="2026-04-21T10:24:23.254239915Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:24:23.254577 containerd[1463]: time="2026-04-21T10:24:23.254486278Z" level=info msg="Start event monitor" Apr 21 10:24:23.254577 containerd[1463]: time="2026-04-21T10:24:23.254504976Z" level=info msg="Start snapshots syncer" Apr 21 10:24:23.254577 containerd[1463]: time="2026-04-21T10:24:23.254541714Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:24:23.254577 containerd[1463]: time="2026-04-21T10:24:23.254548194Z" level=info msg="Start streaming server" Apr 21 10:24:23.254790 containerd[1463]: time="2026-04-21T10:24:23.254781396Z" level=info msg="containerd successfully booted in 0.036602s" Apr 21 10:24:23.255068 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:24:23.256874 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:24:23.266648 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:24:23.270050 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:24:23.272531 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Apr 21 10:24:23.274352 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:24:23.469961 tar[1462]: linux-amd64/README.md Apr 21 10:24:23.493363 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:24:24.730602 systemd-networkd[1389]: eth0: Gained IPv6LL Apr 21 10:24:24.733314 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:24:24.736108 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:24:24.744542 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:24:24.747742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:24:24.750836 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:24:24.765594 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:24:24.765790 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 10:24:24.768147 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:24:24.771652 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:24:26.056159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:24:26.058321 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:24:26.090468 kernel: hrtimer: interrupt took 5066996 ns Apr 21 10:24:26.104482 systemd[1]: Startup finished in 759ms (kernel) + 4.913s (initrd) + 4.911s (userspace) = 10.583s. 
Apr 21 10:24:26.160056 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:24:26.897062 kubelet[1549]: E0421 10:24:26.896901 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:24:26.899428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:24:26.899557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:24:26.899844 systemd[1]: kubelet.service: Consumed 1.954s CPU time. Apr 21 10:24:28.821175 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:24:28.822492 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:56934.service - OpenSSH per-connection server daemon (10.0.0.1:56934). Apr 21 10:24:28.863231 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 56934 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:28.865702 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:28.899714 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:24:28.910644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:24:28.915731 systemd-logind[1447]: New session 1 of user core. Apr 21 10:24:28.933711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:24:28.946865 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 21 10:24:28.952376 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:24:29.030318 systemd[1566]: Queued start job for default target default.target. Apr 21 10:24:29.040126 systemd[1566]: Created slice app.slice - User Application Slice. Apr 21 10:24:29.040167 systemd[1566]: Reached target paths.target - Paths. Apr 21 10:24:29.040178 systemd[1566]: Reached target timers.target - Timers. Apr 21 10:24:29.041399 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:24:29.049960 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:24:29.050047 systemd[1566]: Reached target sockets.target - Sockets. Apr 21 10:24:29.050056 systemd[1566]: Reached target basic.target - Basic System. Apr 21 10:24:29.050081 systemd[1566]: Reached target default.target - Main User Target. Apr 21 10:24:29.050103 systemd[1566]: Startup finished in 86ms. Apr 21 10:24:29.050364 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:24:29.051445 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:24:29.109839 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944). Apr 21 10:24:29.139177 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.140238 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.143857 systemd-logind[1447]: New session 2 of user core. Apr 21 10:24:29.160698 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:24:29.212558 sshd[1577]: pam_unix(sshd:session): session closed for user core Apr 21 10:24:29.230812 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:56944.service: Deactivated successfully. Apr 21 10:24:29.231917 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 21 10:24:29.232906 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:24:29.233807 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:56952.service - OpenSSH per-connection server daemon (10.0.0.1:56952). Apr 21 10:24:29.234415 systemd-logind[1447]: Removed session 2. Apr 21 10:24:29.263212 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 56952 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.264351 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.268150 systemd-logind[1447]: New session 3 of user core. Apr 21 10:24:29.278664 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:24:29.326939 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 21 10:24:29.341653 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:56952.service: Deactivated successfully. Apr 21 10:24:29.342944 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:24:29.343963 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:24:29.348544 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:56964.service - OpenSSH per-connection server daemon (10.0.0.1:56964). Apr 21 10:24:29.349303 systemd-logind[1447]: Removed session 3. Apr 21 10:24:29.375309 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 56964 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.376655 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.380346 systemd-logind[1447]: New session 4 of user core. Apr 21 10:24:29.386447 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:24:29.437959 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 21 10:24:29.452414 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:56964.service: Deactivated successfully. Apr 21 10:24:29.453662 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 21 10:24:29.454593 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:24:29.465523 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:56968.service - OpenSSH per-connection server daemon (10.0.0.1:56968). Apr 21 10:24:29.466393 systemd-logind[1447]: Removed session 4. Apr 21 10:24:29.492162 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 56968 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.493210 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.496793 systemd-logind[1447]: New session 5 of user core. Apr 21 10:24:29.503495 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:24:29.558697 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:24:29.558917 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:24:29.578658 sudo[1601]: pam_unix(sudo:session): session closed for user root Apr 21 10:24:29.580664 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 21 10:24:29.595463 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:56968.service: Deactivated successfully. Apr 21 10:24:29.596675 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:24:29.597746 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:24:29.598828 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976). Apr 21 10:24:29.599527 systemd-logind[1447]: Removed session 5. Apr 21 10:24:29.630861 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.631996 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.635840 systemd-logind[1447]: New session 6 of user core. 
Apr 21 10:24:29.650466 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:24:29.702844 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:24:29.703095 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:24:29.706941 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 21 10:24:29.711596 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:24:29.711805 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:24:29.729944 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:24:29.731249 auditctl[1613]: No rules Apr 21 10:24:29.731601 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:24:29.731799 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:24:29.733751 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:24:29.756585 augenrules[1631]: No rules Apr 21 10:24:29.757517 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:24:29.758409 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 21 10:24:29.759979 sshd[1606]: pam_unix(sshd:session): session closed for user core Apr 21 10:24:29.765859 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:56976.service: Deactivated successfully. Apr 21 10:24:29.766876 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:24:29.767899 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:24:29.768803 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992). Apr 21 10:24:29.769567 systemd-logind[1447]: Removed session 6. 
Apr 21 10:24:29.799570 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:24:29.800636 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:24:29.804225 systemd-logind[1447]: New session 7 of user core. Apr 21 10:24:29.814454 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:24:29.866913 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:24:29.867537 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:24:30.467876 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:24:30.468154 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:24:31.460878 dockerd[1660]: time="2026-04-21T10:24:31.460699103Z" level=info msg="Starting up" Apr 21 10:24:31.702633 dockerd[1660]: time="2026-04-21T10:24:31.702556886Z" level=info msg="Loading containers: start." Apr 21 10:24:31.816294 kernel: Initializing XFRM netlink socket Apr 21 10:24:31.879378 systemd-networkd[1389]: docker0: Link UP Apr 21 10:24:31.903710 dockerd[1660]: time="2026-04-21T10:24:31.903658188Z" level=info msg="Loading containers: done." Apr 21 10:24:31.917555 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2059226173-merged.mount: Deactivated successfully. 
Apr 21 10:24:31.920978 dockerd[1660]: time="2026-04-21T10:24:31.920885952Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:24:31.921296 dockerd[1660]: time="2026-04-21T10:24:31.921225327Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:24:31.921418 dockerd[1660]: time="2026-04-21T10:24:31.921385564Z" level=info msg="Daemon has completed initialization" Apr 21 10:24:31.955695 dockerd[1660]: time="2026-04-21T10:24:31.955478695Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:24:31.955893 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:24:32.747574 containerd[1463]: time="2026-04-21T10:24:32.747512608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 10:24:33.502840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722064248.mount: Deactivated successfully. 
Apr 21 10:24:34.680343 containerd[1463]: time="2026-04-21T10:24:34.680163591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:34.680775 containerd[1463]: time="2026-04-21T10:24:34.680710247Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 21 10:24:34.681908 containerd[1463]: time="2026-04-21T10:24:34.681859767Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:34.684931 containerd[1463]: time="2026-04-21T10:24:34.684878569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:34.686690 containerd[1463]: time="2026-04-21T10:24:34.686619622Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.938255723s" Apr 21 10:24:34.686690 containerd[1463]: time="2026-04-21T10:24:34.686688741Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 21 10:24:34.688596 containerd[1463]: time="2026-04-21T10:24:34.688417492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 21 10:24:35.891568 containerd[1463]: time="2026-04-21T10:24:35.891472236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:35.892348 containerd[1463]: time="2026-04-21T10:24:35.891957124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 21 10:24:35.893686 containerd[1463]: time="2026-04-21T10:24:35.893643023Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:35.896956 containerd[1463]: time="2026-04-21T10:24:35.896905395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:35.898432 containerd[1463]: time="2026-04-21T10:24:35.898383164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.209941818s" Apr 21 10:24:35.898432 containerd[1463]: time="2026-04-21T10:24:35.898425187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 21 10:24:35.899330 containerd[1463]: time="2026-04-21T10:24:35.899295200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 10:24:36.763059 containerd[1463]: time="2026-04-21T10:24:36.762981387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:36.763708 containerd[1463]: time="2026-04-21T10:24:36.763638286Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 21 10:24:36.764561 containerd[1463]: time="2026-04-21T10:24:36.764521232Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:36.766868 containerd[1463]: time="2026-04-21T10:24:36.766825675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:36.768072 containerd[1463]: time="2026-04-21T10:24:36.768002092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 868.674627ms" Apr 21 10:24:36.768072 containerd[1463]: time="2026-04-21T10:24:36.768054412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 21 10:24:36.769109 containerd[1463]: time="2026-04-21T10:24:36.769070695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 10:24:36.946486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:24:36.953679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:24:37.279697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:24:37.301719 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:24:37.566480 kubelet[1883]: E0421 10:24:37.563887 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:24:37.567101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:24:37.567226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:24:37.764198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506938435.mount: Deactivated successfully. Apr 21 10:24:37.949088 containerd[1463]: time="2026-04-21T10:24:37.948998356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:37.951482 containerd[1463]: time="2026-04-21T10:24:37.951427196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 21 10:24:37.952607 containerd[1463]: time="2026-04-21T10:24:37.952548518Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:37.954139 containerd[1463]: time="2026-04-21T10:24:37.954077127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:37.954558 containerd[1463]: time="2026-04-21T10:24:37.954513797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id 
\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.185409833s" Apr 21 10:24:37.954558 containerd[1463]: time="2026-04-21T10:24:37.954548091Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 21 10:24:37.956231 containerd[1463]: time="2026-04-21T10:24:37.955870258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 10:24:38.353594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642186026.mount: Deactivated successfully. Apr 21 10:24:38.889936 containerd[1463]: time="2026-04-21T10:24:38.889851027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:38.890436 containerd[1463]: time="2026-04-21T10:24:38.890371798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 21 10:24:38.891208 containerd[1463]: time="2026-04-21T10:24:38.891183013Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:38.893577 containerd[1463]: time="2026-04-21T10:24:38.893533227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:38.894435 containerd[1463]: time="2026-04-21T10:24:38.894395000Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 938.47816ms" Apr 21 10:24:38.894469 containerd[1463]: time="2026-04-21T10:24:38.894432454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 21 10:24:38.895375 containerd[1463]: time="2026-04-21T10:24:38.895343613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 10:24:39.298870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487992603.mount: Deactivated successfully. Apr 21 10:24:39.304551 containerd[1463]: time="2026-04-21T10:24:39.304505609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:39.305189 containerd[1463]: time="2026-04-21T10:24:39.305154146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 21 10:24:39.306211 containerd[1463]: time="2026-04-21T10:24:39.306186626Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:39.307983 containerd[1463]: time="2026-04-21T10:24:39.307947340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:39.308542 containerd[1463]: time="2026-04-21T10:24:39.308503591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 413.138574ms" Apr 21 10:24:39.308575 containerd[1463]: time="2026-04-21T10:24:39.308540107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 10:24:39.309474 containerd[1463]: time="2026-04-21T10:24:39.309434463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 10:24:39.722580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034626284.mount: Deactivated successfully. Apr 21 10:24:40.338964 containerd[1463]: time="2026-04-21T10:24:40.338915531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:40.339483 containerd[1463]: time="2026-04-21T10:24:40.339449356Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 21 10:24:40.340306 containerd[1463]: time="2026-04-21T10:24:40.340234688Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:40.342338 containerd[1463]: time="2026-04-21T10:24:40.342302626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:24:40.343117 containerd[1463]: time="2026-04-21T10:24:40.343085303Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.033604184s" Apr 21 
10:24:40.343147 containerd[1463]: time="2026-04-21T10:24:40.343114781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 21 10:24:41.396152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:24:41.406524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:24:41.426416 systemd[1]: Reloading requested from client PID 2051 ('systemctl') (unit session-7.scope)... Apr 21 10:24:41.426455 systemd[1]: Reloading... Apr 21 10:24:41.477309 zram_generator::config[2096]: No configuration found. Apr 21 10:24:41.550161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:24:41.597155 systemd[1]: Reloading finished in 170 ms. Apr 21 10:24:41.636868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:24:41.638747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:24:41.640080 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:24:41.640247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:24:41.641458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:24:41.746889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:24:41.750123 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:24:41.789120 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:24:42.016494 kubelet[2140]: I0421 10:24:42.016338 2140 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:24:42.016494 kubelet[2140]: I0421 10:24:42.016407 2140 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:24:42.016494 kubelet[2140]: I0421 10:24:42.016443 2140 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:24:42.016494 kubelet[2140]: I0421 10:24:42.016448 2140 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:24:42.017165 kubelet[2140]: I0421 10:24:42.017126 2140 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:24:42.066289 kubelet[2140]: E0421 10:24:42.066188 2140 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:24:42.066891 kubelet[2140]: I0421 10:24:42.066858 2140 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:24:42.071421 kubelet[2140]: E0421 10:24:42.071388 2140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:24:42.071512 kubelet[2140]: I0421 10:24:42.071437 2140 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:24:42.075425 kubelet[2140]: I0421 10:24:42.075391 2140 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:24:42.076162 kubelet[2140]: I0421 10:24:42.076105 2140 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:24:42.076353 kubelet[2140]: I0421 10:24:42.076141 2140 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:24:42.076353 kubelet[2140]: I0421 10:24:42.076352 2140 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:24:42.076485 
kubelet[2140]: I0421 10:24:42.076360 2140 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:24:42.076485 kubelet[2140]: I0421 10:24:42.076453 2140 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:24:42.077879 kubelet[2140]: I0421 10:24:42.077828 2140 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:24:42.078113 kubelet[2140]: I0421 10:24:42.078071 2140 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:24:42.078113 kubelet[2140]: I0421 10:24:42.078102 2140 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:24:42.078153 kubelet[2140]: I0421 10:24:42.078141 2140 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:24:42.078168 kubelet[2140]: I0421 10:24:42.078159 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:24:42.079884 kubelet[2140]: I0421 10:24:42.079849 2140 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:24:42.081575 kubelet[2140]: I0421 10:24:42.081532 2140 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:24:42.081575 kubelet[2140]: I0421 10:24:42.081566 2140 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:24:42.081677 kubelet[2140]: W0421 10:24:42.081656 2140 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:24:42.085080 kubelet[2140]: I0421 10:24:42.085028 2140 server.go:1257] "Started kubelet" Apr 21 10:24:42.088307 kubelet[2140]: I0421 10:24:42.085467 2140 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:24:42.088307 kubelet[2140]: I0421 10:24:42.085750 2140 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:24:42.088307 kubelet[2140]: I0421 10:24:42.085887 2140 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:24:42.088307 kubelet[2140]: I0421 10:24:42.086692 2140 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:24:42.088307 kubelet[2140]: I0421 10:24:42.087317 2140 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:24:42.092241 kubelet[2140]: I0421 10:24:42.092211 2140 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:24:42.092434 kubelet[2140]: E0421 10:24:42.092411 2140 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:24:42.092504 kubelet[2140]: I0421 10:24:42.092497 2140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:24:42.093502 kubelet[2140]: E0421 10:24:42.093465 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:24:42.093586 kubelet[2140]: I0421 10:24:42.093552 2140 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:24:42.093855 kubelet[2140]: I0421 10:24:42.093827 2140 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:24:42.093935 kubelet[2140]: I0421 10:24:42.093905 2140 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:24:42.094586 kubelet[2140]: I0421 10:24:42.094552 2140 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:24:42.095418 kubelet[2140]: E0421 10:24:42.095183 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Apr 21 10:24:42.095464 kubelet[2140]: E0421 10:24:42.094538 2140 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8583f58489bdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:24:42.084998109 +0000 UTC m=+0.331797530,LastTimestamp:2026-04-21 10:24:42.084998109 +0000 UTC m=+0.331797530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:24:42.096220 kubelet[2140]: I0421 10:24:42.096200 2140 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:24:42.096220 kubelet[2140]: I0421 10:24:42.096221 2140 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:24:42.109554 kubelet[2140]: I0421 10:24:42.109524 2140 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:24:42.109554 kubelet[2140]: I0421 10:24:42.109542 2140 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:24:42.109648 kubelet[2140]: I0421 10:24:42.109565 2140 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:24:42.109826 kubelet[2140]: I0421 10:24:42.109714 2140 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:24:42.111101 kubelet[2140]: I0421 10:24:42.110803 2140 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:24:42.111451 kubelet[2140]: I0421 10:24:42.111424 2140 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:24:42.111538 kubelet[2140]: I0421 10:24:42.111513 2140 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:24:42.111667 kubelet[2140]: E0421 10:24:42.111612 2140 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:24:42.112737 kubelet[2140]: I0421 10:24:42.112707 2140 policy_none.go:50] "Start" Apr 21 10:24:42.112827 kubelet[2140]: I0421 10:24:42.112821 2140 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:24:42.112918 kubelet[2140]: I0421 10:24:42.112910 2140 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:24:42.115471 kubelet[2140]: I0421 10:24:42.115425 2140 policy_none.go:44] "Start" Apr 21 10:24:42.119414 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:24:42.136741 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:24:42.139644 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:24:42.150092 kubelet[2140]: E0421 10:24:42.150012 2140 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:24:42.150387 kubelet[2140]: I0421 10:24:42.150305 2140 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:24:42.150387 kubelet[2140]: I0421 10:24:42.150322 2140 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:24:42.150788 kubelet[2140]: I0421 10:24:42.150744 2140 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:24:42.152367 kubelet[2140]: E0421 10:24:42.152317 2140 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:24:42.152367 kubelet[2140]: E0421 10:24:42.152362 2140 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:24:42.226983 systemd[1]: Created slice kubepods-burstable-pod06e15774c8d518a1a966bde848bcc7f9.slice - libcontainer container kubepods-burstable-pod06e15774c8d518a1a966bde848bcc7f9.slice. Apr 21 10:24:42.243502 kubelet[2140]: E0421 10:24:42.243436 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:24:42.246149 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. 
Apr 21 10:24:42.247296 kubelet[2140]: E0421 10:24:42.247227 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:24:42.248378 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. Apr 21 10:24:42.249789 kubelet[2140]: E0421 10:24:42.249751 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:24:42.251434 kubelet[2140]: I0421 10:24:42.251416 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:24:42.252561 kubelet[2140]: E0421 10:24:42.252491 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Apr 21 10:24:42.296087 kubelet[2140]: E0421 10:24:42.295923 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Apr 21 10:24:42.395000 kubelet[2140]: I0421 10:24:42.394936 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:24:42.395000 kubelet[2140]: I0421 10:24:42.394998 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:24:42.395000 kubelet[2140]: I0421 10:24:42.395013 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:24:42.395380 kubelet[2140]: I0421 10:24:42.395026 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:24:42.395380 kubelet[2140]: I0421 10:24:42.395127 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:24:42.395380 kubelet[2140]: I0421 10:24:42.395141 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:24:42.395380 kubelet[2140]: I0421 10:24:42.395155 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:24:42.395380 kubelet[2140]: I0421 10:24:42.395167 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:24:42.395472 kubelet[2140]: I0421 10:24:42.395180 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:24:42.455463 kubelet[2140]: I0421 10:24:42.455382 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:24:42.455810 kubelet[2140]: E0421 10:24:42.455742 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Apr 21 10:24:42.548527 kubelet[2140]: E0421 10:24:42.548402 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:42.549704 containerd[1463]: time="2026-04-21T10:24:42.549631452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e15774c8d518a1a966bde848bcc7f9,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:42.550514 kubelet[2140]: E0421 10:24:42.550235 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:42.550737 containerd[1463]: time="2026-04-21T10:24:42.550702638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:42.552223 kubelet[2140]: E0421 10:24:42.552185 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:42.552585 containerd[1463]: time="2026-04-21T10:24:42.552560848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:42.697514 kubelet[2140]: E0421 10:24:42.697401 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Apr 21 10:24:42.858473 kubelet[2140]: I0421 10:24:42.858341 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:24:42.858811 kubelet[2140]: E0421 10:24:42.858737 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Apr 21 10:24:42.944691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452794056.mount: Deactivated successfully. 
Apr 21 10:24:42.950084 containerd[1463]: time="2026-04-21T10:24:42.949998475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:24:42.952247 containerd[1463]: time="2026-04-21T10:24:42.952193577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:24:42.953040 containerd[1463]: time="2026-04-21T10:24:42.953000887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:24:42.954377 containerd[1463]: time="2026-04-21T10:24:42.954324284Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:24:42.955436 containerd[1463]: time="2026-04-21T10:24:42.955403820Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:24:42.955965 containerd[1463]: time="2026-04-21T10:24:42.955895889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:24:42.956236 containerd[1463]: time="2026-04-21T10:24:42.956191321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:24:42.958422 containerd[1463]: time="2026-04-21T10:24:42.958382167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:24:42.960180 
containerd[1463]: time="2026-04-21T10:24:42.960119892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.413354ms" Apr 21 10:24:42.960751 containerd[1463]: time="2026-04-21T10:24:42.960719707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 409.949473ms" Apr 21 10:24:42.963244 containerd[1463]: time="2026-04-21T10:24:42.963163372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.560416ms" Apr 21 10:24:43.059328 containerd[1463]: time="2026-04-21T10:24:43.058748059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:43.059328 containerd[1463]: time="2026-04-21T10:24:43.058805955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:43.059328 containerd[1463]: time="2026-04-21T10:24:43.058813532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.059328 containerd[1463]: time="2026-04-21T10:24:43.058908771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.060741 containerd[1463]: time="2026-04-21T10:24:43.059841557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:43.060741 containerd[1463]: time="2026-04-21T10:24:43.059873235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:43.060741 containerd[1463]: time="2026-04-21T10:24:43.059916210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.060741 containerd[1463]: time="2026-04-21T10:24:43.060508344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.063834 containerd[1463]: time="2026-04-21T10:24:43.063644748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:43.063834 containerd[1463]: time="2026-04-21T10:24:43.063676871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:43.063834 containerd[1463]: time="2026-04-21T10:24:43.063691377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.063961 containerd[1463]: time="2026-04-21T10:24:43.063839623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:43.086441 systemd[1]: Started cri-containerd-43038fdf461fb41e52736e41cab02bfa17a8c87d1348c6b4ecec2048c9944791.scope - libcontainer container 43038fdf461fb41e52736e41cab02bfa17a8c87d1348c6b4ecec2048c9944791. 
Apr 21 10:24:43.090208 systemd[1]: Started cri-containerd-8d40b57cddeec02d39ddc87d2f4af023a2e7787f5c6ea541db5139235ef93024.scope - libcontainer container 8d40b57cddeec02d39ddc87d2f4af023a2e7787f5c6ea541db5139235ef93024. Apr 21 10:24:43.090993 systemd[1]: Started cri-containerd-9eaef2ac0e883fad0ac11627233aa43aa8a9c5d8826f4fdb68e823465d0e4074.scope - libcontainer container 9eaef2ac0e883fad0ac11627233aa43aa8a9c5d8826f4fdb68e823465d0e4074. Apr 21 10:24:43.125728 containerd[1463]: time="2026-04-21T10:24:43.125437296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"43038fdf461fb41e52736e41cab02bfa17a8c87d1348c6b4ecec2048c9944791\"" Apr 21 10:24:43.126893 containerd[1463]: time="2026-04-21T10:24:43.126869950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d40b57cddeec02d39ddc87d2f4af023a2e7787f5c6ea541db5139235ef93024\"" Apr 21 10:24:43.128789 kubelet[2140]: E0421 10:24:43.128726 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:43.130351 kubelet[2140]: E0421 10:24:43.130177 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:43.137214 containerd[1463]: time="2026-04-21T10:24:43.137173388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06e15774c8d518a1a966bde848bcc7f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eaef2ac0e883fad0ac11627233aa43aa8a9c5d8826f4fdb68e823465d0e4074\"" Apr 21 10:24:43.137967 containerd[1463]: time="2026-04-21T10:24:43.137931789Z" level=info 
msg="CreateContainer within sandbox \"43038fdf461fb41e52736e41cab02bfa17a8c87d1348c6b4ecec2048c9944791\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:24:43.138483 containerd[1463]: time="2026-04-21T10:24:43.138401000Z" level=info msg="CreateContainer within sandbox \"8d40b57cddeec02d39ddc87d2f4af023a2e7787f5c6ea541db5139235ef93024\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:24:43.140488 kubelet[2140]: E0421 10:24:43.140451 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:43.146204 containerd[1463]: time="2026-04-21T10:24:43.146163050Z" level=info msg="CreateContainer within sandbox \"9eaef2ac0e883fad0ac11627233aa43aa8a9c5d8826f4fdb68e823465d0e4074\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:24:43.162452 containerd[1463]: time="2026-04-21T10:24:43.162397581Z" level=info msg="CreateContainer within sandbox \"8d40b57cddeec02d39ddc87d2f4af023a2e7787f5c6ea541db5139235ef93024\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3cb9d77e89460e327af1a931d222aa90c3c8336e25d5faf2be035cecf6022e52\"" Apr 21 10:24:43.163534 containerd[1463]: time="2026-04-21T10:24:43.163449135Z" level=info msg="StartContainer for \"3cb9d77e89460e327af1a931d222aa90c3c8336e25d5faf2be035cecf6022e52\"" Apr 21 10:24:43.164677 containerd[1463]: time="2026-04-21T10:24:43.164617808Z" level=info msg="CreateContainer within sandbox \"43038fdf461fb41e52736e41cab02bfa17a8c87d1348c6b4ecec2048c9944791\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2a8fcdf08679e6ed47b77f53741298a1a93dbfc360ed4fec06395517b9b6497\"" Apr 21 10:24:43.165508 containerd[1463]: time="2026-04-21T10:24:43.165480518Z" level=info msg="StartContainer for \"a2a8fcdf08679e6ed47b77f53741298a1a93dbfc360ed4fec06395517b9b6497\"" 
Apr 21 10:24:43.166605 containerd[1463]: time="2026-04-21T10:24:43.166575608Z" level=info msg="CreateContainer within sandbox \"9eaef2ac0e883fad0ac11627233aa43aa8a9c5d8826f4fdb68e823465d0e4074\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d340cf283439e6f5e658e65530502603ab601b5cbbf8634a723f582dc6968894\""
Apr 21 10:24:43.167229 containerd[1463]: time="2026-04-21T10:24:43.167143355Z" level=info msg="StartContainer for \"d340cf283439e6f5e658e65530502603ab601b5cbbf8634a723f582dc6968894\""
Apr 21 10:24:43.190526 systemd[1]: Started cri-containerd-a2a8fcdf08679e6ed47b77f53741298a1a93dbfc360ed4fec06395517b9b6497.scope - libcontainer container a2a8fcdf08679e6ed47b77f53741298a1a93dbfc360ed4fec06395517b9b6497.
Apr 21 10:24:43.194697 systemd[1]: Started cri-containerd-3cb9d77e89460e327af1a931d222aa90c3c8336e25d5faf2be035cecf6022e52.scope - libcontainer container 3cb9d77e89460e327af1a931d222aa90c3c8336e25d5faf2be035cecf6022e52.
Apr 21 10:24:43.195737 systemd[1]: Started cri-containerd-d340cf283439e6f5e658e65530502603ab601b5cbbf8634a723f582dc6968894.scope - libcontainer container d340cf283439e6f5e658e65530502603ab601b5cbbf8634a723f582dc6968894.
Apr 21 10:24:43.236883 containerd[1463]: time="2026-04-21T10:24:43.236849911Z" level=info msg="StartContainer for \"3cb9d77e89460e327af1a931d222aa90c3c8336e25d5faf2be035cecf6022e52\" returns successfully"
Apr 21 10:24:43.241501 containerd[1463]: time="2026-04-21T10:24:43.241353444Z" level=info msg="StartContainer for \"a2a8fcdf08679e6ed47b77f53741298a1a93dbfc360ed4fec06395517b9b6497\" returns successfully"
Apr 21 10:24:43.245664 containerd[1463]: time="2026-04-21T10:24:43.245598747Z" level=info msg="StartContainer for \"d340cf283439e6f5e658e65530502603ab601b5cbbf8634a723f582dc6968894\" returns successfully"
Apr 21 10:24:43.662192 kubelet[2140]: I0421 10:24:43.662127 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:24:44.119828 kubelet[2140]: E0421 10:24:44.119796 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:24:44.120161 kubelet[2140]: E0421 10:24:44.119981 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:44.121324 kubelet[2140]: E0421 10:24:44.121307 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:24:44.121521 kubelet[2140]: E0421 10:24:44.121391 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:44.122252 kubelet[2140]: E0421 10:24:44.122217 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:24:44.122366 kubelet[2140]: E0421 10:24:44.122347 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:44.287033 kubelet[2140]: E0421 10:24:44.286964 2140 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 21 10:24:44.393404 kubelet[2140]: I0421 10:24:44.390969 2140 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 10:24:44.393404 kubelet[2140]: E0421 10:24:44.390998 2140 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 21 10:24:44.402085 kubelet[2140]: E0421 10:24:44.402030 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:44.503424 kubelet[2140]: E0421 10:24:44.503331 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:44.603578 kubelet[2140]: E0421 10:24:44.603515 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:44.704434 kubelet[2140]: E0421 10:24:44.704374 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:44.805553 kubelet[2140]: E0421 10:24:44.805480 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:44.906238 kubelet[2140]: E0421 10:24:44.906187 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:45.007138 kubelet[2140]: E0421 10:24:45.006967 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:45.107715 kubelet[2140]: E0421 10:24:45.107629 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:45.125579 kubelet[2140]: E0421 10:24:45.125535 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:24:45.125837 kubelet[2140]: E0421 10:24:45.125626 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:24:45.125837 kubelet[2140]: E0421 10:24:45.125717 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:45.125837 kubelet[2140]: E0421 10:24:45.125728 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:45.208719 kubelet[2140]: E0421 10:24:45.208648 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:24:45.394937 kubelet[2140]: I0421 10:24:45.394763 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:45.402238 kubelet[2140]: I0421 10:24:45.402189 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:45.406033 kubelet[2140]: I0421 10:24:45.405924 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:46.081018 kubelet[2140]: I0421 10:24:46.080952 2140 apiserver.go:52] "Watching apiserver"
Apr 21 10:24:46.085743 kubelet[2140]: E0421 10:24:46.085671 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:46.094943 kubelet[2140]: I0421 10:24:46.094857 2140 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:24:46.125751 kubelet[2140]: E0421 10:24:46.125699 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:46.125751 kubelet[2140]: E0421 10:24:46.125725 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:46.191923 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-7.scope)...
Apr 21 10:24:46.191951 systemd[1]: Reloading...
Apr 21 10:24:46.244334 zram_generator::config[2468]: No configuration found.
Apr 21 10:24:46.319708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:24:46.372872 systemd[1]: Reloading finished in 180 ms.
Apr 21 10:24:46.408947 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:24:46.427765 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:24:46.427964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:24:46.436510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:24:46.531963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:24:46.535220 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:24:46.569646 kubelet[2513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:24:46.577025 kubelet[2513]: I0421 10:24:46.576932 2513 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:24:46.577025 kubelet[2513]: I0421 10:24:46.576958 2513 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:24:46.577025 kubelet[2513]: I0421 10:24:46.576970 2513 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:24:46.577025 kubelet[2513]: I0421 10:24:46.576973 2513 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:24:46.578098 kubelet[2513]: I0421 10:24:46.577372 2513 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:24:46.578296 kubelet[2513]: I0421 10:24:46.578244 2513 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 10:24:46.581582 kubelet[2513]: I0421 10:24:46.581539 2513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:24:46.585555 kubelet[2513]: E0421 10:24:46.585503 2513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:24:46.585597 kubelet[2513]: I0421 10:24:46.585579 2513 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:24:46.589669 kubelet[2513]: I0421 10:24:46.589613 2513 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:24:46.589883 kubelet[2513]: I0421 10:24:46.589844 2513 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:24:46.589985 kubelet[2513]: I0421 10:24:46.589873 2513 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:24:46.590100 kubelet[2513]: I0421 10:24:46.589987 2513 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:24:46.590100 kubelet[2513]: I0421 10:24:46.589993 2513 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:24:46.590100 kubelet[2513]: I0421 10:24:46.590012 2513 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:24:46.590541 kubelet[2513]: I0421 10:24:46.590494 2513 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:24:46.590705 kubelet[2513]: I0421 10:24:46.590682 2513 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:24:46.590705 kubelet[2513]: I0421 10:24:46.590700 2513 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:24:46.590770 kubelet[2513]: I0421 10:24:46.590716 2513 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:24:46.590770 kubelet[2513]: I0421 10:24:46.590723 2513 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:24:46.594789 kubelet[2513]: I0421 10:24:46.594773 2513 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:24:46.597593 kubelet[2513]: I0421 10:24:46.597500 2513 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:24:46.597593 kubelet[2513]: I0421 10:24:46.597544 2513 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:24:46.603008 kubelet[2513]: I0421 10:24:46.602997 2513 server.go:1257] "Started kubelet"
Apr 21 10:24:46.604064 kubelet[2513]: I0421 10:24:46.603743 2513 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:24:46.604064 kubelet[2513]: I0421 10:24:46.603795 2513 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:24:46.604064 kubelet[2513]: I0421 10:24:46.603981 2513 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:24:46.604064 kubelet[2513]: I0421 10:24:46.604014 2513 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:24:46.605001 kubelet[2513]: I0421 10:24:46.604886 2513 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:24:46.608252 kubelet[2513]: E0421 10:24:46.608171 2513 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:24:46.608416 kubelet[2513]: I0421 10:24:46.608373 2513 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:24:46.609152 kubelet[2513]: I0421 10:24:46.608960 2513 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:24:46.611293 kubelet[2513]: I0421 10:24:46.609598 2513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:24:46.611293 kubelet[2513]: I0421 10:24:46.610367 2513 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:24:46.611293 kubelet[2513]: I0421 10:24:46.610522 2513 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:24:46.611815 kubelet[2513]: I0421 10:24:46.611790 2513 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:24:46.611900 kubelet[2513]: I0421 10:24:46.611877 2513 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:24:46.613955 kubelet[2513]: I0421 10:24:46.613187 2513 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:24:46.620118 kubelet[2513]: I0421 10:24:46.620054 2513 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:24:46.621160 kubelet[2513]: I0421 10:24:46.621135 2513 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:24:46.621160 kubelet[2513]: I0421 10:24:46.621159 2513 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:24:46.621233 kubelet[2513]: I0421 10:24:46.621176 2513 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:24:46.621233 kubelet[2513]: E0421 10:24:46.621214 2513 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:24:46.639677 kubelet[2513]: I0421 10:24:46.639593 2513 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:24:46.639677 kubelet[2513]: I0421 10:24:46.639614 2513 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:24:46.639677 kubelet[2513]: I0421 10:24:46.639628 2513 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:24:46.639762 kubelet[2513]: I0421 10:24:46.639707 2513 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 21 10:24:46.639762 kubelet[2513]: I0421 10:24:46.639714 2513 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 21 10:24:46.639762 kubelet[2513]: I0421 10:24:46.639726 2513 policy_none.go:50] "Start"
Apr 21 10:24:46.639762 kubelet[2513]: I0421 10:24:46.639731 2513 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:24:46.639762 kubelet[2513]: I0421 10:24:46.639737 2513 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:24:46.639836 kubelet[2513]: I0421 10:24:46.639806 2513 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 10:24:46.639836 kubelet[2513]: I0421 10:24:46.639812 2513 policy_none.go:44] "Start"
Apr 21 10:24:46.644593 kubelet[2513]: E0421 10:24:46.644485 2513 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:24:46.644645 kubelet[2513]: I0421 10:24:46.644601 2513 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 10:24:46.644645 kubelet[2513]: I0421 10:24:46.644611 2513 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:24:46.644795 kubelet[2513]: I0421 10:24:46.644780 2513 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 10:24:46.646024 kubelet[2513]: E0421 10:24:46.645999 2513 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:24:46.722605 kubelet[2513]: I0421 10:24:46.722516 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:46.722744 kubelet[2513]: I0421 10:24:46.722522 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:46.722770 kubelet[2513]: I0421 10:24:46.722677 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.729785 kubelet[2513]: E0421 10:24:46.729730 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.730217 kubelet[2513]: E0421 10:24:46.729999 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:46.730380 kubelet[2513]: E0421 10:24:46.730355 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:46.751803 kubelet[2513]: I0421 10:24:46.751728 2513 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 10:24:46.757863 kubelet[2513]: I0421 10:24:46.757832 2513 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 21 10:24:46.757913 kubelet[2513]: I0421 10:24:46.757897 2513 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 10:24:46.912871 kubelet[2513]: I0421 10:24:46.912634 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:46.912871 kubelet[2513]: I0421 10:24:46.912752 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:46.913420 kubelet[2513]: I0421 10:24:46.912812 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:46.913474 kubelet[2513]: I0421 10:24:46.913418 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.913556 kubelet[2513]: I0421 10:24:46.913513 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.913580 kubelet[2513]: I0421 10:24:46.913560 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.913682 kubelet[2513]: I0421 10:24:46.913635 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e15774c8d518a1a966bde848bcc7f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06e15774c8d518a1a966bde848bcc7f9\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:46.913705 kubelet[2513]: I0421 10:24:46.913685 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:46.913729 kubelet[2513]: I0421 10:24:46.913703 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:47.030405 kubelet[2513]: E0421 10:24:47.030322 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.030543 kubelet[2513]: E0421 10:24:47.030333 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.030543 kubelet[2513]: E0421 10:24:47.030492 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.194644 sudo[2556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 21 10:24:47.194873 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 21 10:24:47.591847 kubelet[2513]: I0421 10:24:47.591792 2513 apiserver.go:52] "Watching apiserver"
Apr 21 10:24:47.611487 kubelet[2513]: I0421 10:24:47.611446 2513 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:24:47.633854 kubelet[2513]: I0421 10:24:47.633824 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:47.633920 kubelet[2513]: I0421 10:24:47.633895 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:47.634287 kubelet[2513]: I0421 10:24:47.634240 2513 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:47.642010 kubelet[2513]: E0421 10:24:47.641294 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:24:47.642010 kubelet[2513]: E0421 10:24:47.641445 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.642395 kubelet[2513]: E0421 10:24:47.642357 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:24:47.645288 kubelet[2513]: E0421 10:24:47.642485 2513 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:24:47.645288 kubelet[2513]: E0421 10:24:47.642547 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.645288 kubelet[2513]: E0421 10:24:47.642580 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:47.652016 sudo[2556]: pam_unix(sudo:session): session closed for user root
Apr 21 10:24:47.654511 kubelet[2513]: I0421 10:24:47.654369 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.654319188 podStartE2EDuration="2.654319188s" podCreationTimestamp="2026-04-21 10:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:24:47.654186163 +0000 UTC m=+1.115729039" watchObservedRunningTime="2026-04-21 10:24:47.654319188 +0000 UTC m=+1.115862052"
Apr 21 10:24:47.669544 kubelet[2513]: I0421 10:24:47.669479 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.669468529 podStartE2EDuration="2.669468529s" podCreationTimestamp="2026-04-21 10:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:24:47.662247275 +0000 UTC m=+1.123790149" watchObservedRunningTime="2026-04-21 10:24:47.669468529 +0000 UTC m=+1.131011394"
Apr 21 10:24:48.636902 kubelet[2513]: E0421 10:24:48.635808 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:48.636902 kubelet[2513]: E0421 10:24:48.635808 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:48.636902 kubelet[2513]: E0421 10:24:48.635873 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:48.648635 kubelet[2513]: I0421 10:24:48.648588 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.648577193 podStartE2EDuration="3.648577193s" podCreationTimestamp="2026-04-21 10:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:24:47.669627176 +0000 UTC m=+1.131170040" watchObservedRunningTime="2026-04-21 10:24:48.648577193 +0000 UTC m=+2.110120069"
Apr 21 10:24:48.786195 sudo[1642]: pam_unix(sudo:session): session closed for user root
Apr 21 10:24:48.787579 sshd[1639]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:48.790177 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:56992.service: Deactivated successfully.
Apr 21 10:24:48.791449 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:24:48.791585 systemd[1]: session-7.scope: Consumed 3.693s CPU time, 158.5M memory peak, 0B memory swap peak.
Apr 21 10:24:48.791967 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:24:48.792842 systemd-logind[1447]: Removed session 7.
Apr 21 10:24:49.638600 kubelet[2513]: E0421 10:24:49.637824 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:49.638600 kubelet[2513]: E0421 10:24:49.638199 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:50.361583 kubelet[2513]: E0421 10:24:50.361547 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:50.638750 kubelet[2513]: E0421 10:24:50.638628 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:52.725751 kubelet[2513]: E0421 10:24:52.725715 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:24:53.360752 kubelet[2513]: I0421 10:24:53.360699 2513 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 10:24:53.361193 containerd[1463]: time="2026-04-21T10:24:53.361138925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 10:24:53.361468 kubelet[2513]: I0421 10:24:53.361388 2513 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:24:54.502620 systemd[1]: Created slice kubepods-burstable-pod9137750a_f212_45a1_bafe_fd167f2dec35.slice - libcontainer container kubepods-burstable-pod9137750a_f212_45a1_bafe_fd167f2dec35.slice. Apr 21 10:24:54.507714 systemd[1]: Created slice kubepods-besteffort-podcca9f627_ea4c_4a18_9688_eed6e7d150e1.slice - libcontainer container kubepods-besteffort-podcca9f627_ea4c_4a18_9688_eed6e7d150e1.slice. Apr 21 10:24:54.567442 kubelet[2513]: I0421 10:24:54.567349 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-run\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567442 kubelet[2513]: I0421 10:24:54.567428 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-net\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567468 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-hubble-tls\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567490 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qzkv\" (UniqueName: \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-kube-api-access-8qzkv\") pod \"cilium-qq8sk\" (UID: 
\"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567508 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca9f627-ea4c-4a18-9688-eed6e7d150e1-xtables-lock\") pod \"kube-proxy-6nhrf\" (UID: \"cca9f627-ea4c-4a18-9688-eed6e7d150e1\") " pod="kube-system/kube-proxy-6nhrf" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567526 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-hostproc\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567536 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-etc-cni-netd\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567861 kubelet[2513]: I0421 10:24:54.567546 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-xtables-lock\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567962 kubelet[2513]: I0421 10:24:54.567584 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9137750a-f212-45a1-bafe-fd167f2dec35-clustermesh-secrets\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567962 kubelet[2513]: I0421 
10:24:54.567597 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-kernel\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567962 kubelet[2513]: I0421 10:24:54.567608 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca9f627-ea4c-4a18-9688-eed6e7d150e1-lib-modules\") pod \"kube-proxy-6nhrf\" (UID: \"cca9f627-ea4c-4a18-9688-eed6e7d150e1\") " pod="kube-system/kube-proxy-6nhrf" Apr 21 10:24:54.567962 kubelet[2513]: I0421 10:24:54.567623 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-bpf-maps\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567962 kubelet[2513]: I0421 10:24:54.567636 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-lib-modules\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.567962 kubelet[2513]: I0421 10:24:54.567647 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cca9f627-ea4c-4a18-9688-eed6e7d150e1-kube-proxy\") pod \"kube-proxy-6nhrf\" (UID: \"cca9f627-ea4c-4a18-9688-eed6e7d150e1\") " pod="kube-system/kube-proxy-6nhrf" Apr 21 10:24:54.568052 kubelet[2513]: I0421 10:24:54.567663 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-cgroup\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.568052 kubelet[2513]: I0421 10:24:54.567678 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cni-path\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.568052 kubelet[2513]: I0421 10:24:54.567692 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-config-path\") pod \"cilium-qq8sk\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " pod="kube-system/cilium-qq8sk" Apr 21 10:24:54.568052 kubelet[2513]: I0421 10:24:54.567706 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xth5p\" (UniqueName: \"kubernetes.io/projected/cca9f627-ea4c-4a18-9688-eed6e7d150e1-kube-api-access-xth5p\") pod \"kube-proxy-6nhrf\" (UID: \"cca9f627-ea4c-4a18-9688-eed6e7d150e1\") " pod="kube-system/kube-proxy-6nhrf" Apr 21 10:24:54.605782 systemd[1]: Created slice kubepods-besteffort-pod51d6c6dc_c053_492e_9dda_0506ccfcf7ff.slice - libcontainer container kubepods-besteffort-pod51d6c6dc_c053_492e_9dda_0506ccfcf7ff.slice. 
Apr 21 10:24:54.668659 kubelet[2513]: I0421 10:24:54.668562 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkx6\" (UniqueName: \"kubernetes.io/projected/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-kube-api-access-htkx6\") pod \"cilium-operator-78cf5644cb-nn9cs\" (UID: \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\") " pod="kube-system/cilium-operator-78cf5644cb-nn9cs" Apr 21 10:24:54.668872 kubelet[2513]: I0421 10:24:54.668762 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-cilium-config-path\") pod \"cilium-operator-78cf5644cb-nn9cs\" (UID: \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\") " pod="kube-system/cilium-operator-78cf5644cb-nn9cs" Apr 21 10:24:54.810985 kubelet[2513]: E0421 10:24:54.810830 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:54.812599 containerd[1463]: time="2026-04-21T10:24:54.812215960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq8sk,Uid:9137750a-f212-45a1-bafe-fd167f2dec35,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:54.822514 kubelet[2513]: E0421 10:24:54.822463 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:54.823298 containerd[1463]: time="2026-04-21T10:24:54.822940726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nhrf,Uid:cca9f627-ea4c-4a18-9688-eed6e7d150e1,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:54.835207 containerd[1463]: time="2026-04-21T10:24:54.835078657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:54.835207 containerd[1463]: time="2026-04-21T10:24:54.835151077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:54.835207 containerd[1463]: time="2026-04-21T10:24:54.835163142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.835551 containerd[1463]: time="2026-04-21T10:24:54.835450324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.844848 containerd[1463]: time="2026-04-21T10:24:54.844484608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:54.844848 containerd[1463]: time="2026-04-21T10:24:54.844552812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:54.844848 containerd[1463]: time="2026-04-21T10:24:54.844565090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.844848 containerd[1463]: time="2026-04-21T10:24:54.844615180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.852444 systemd[1]: Started cri-containerd-d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba.scope - libcontainer container d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba. Apr 21 10:24:54.856994 systemd[1]: Started cri-containerd-a9b3b765625c97da13580bde42843ffc56ea6541fc03b4572c05cc97f6bab733.scope - libcontainer container a9b3b765625c97da13580bde42843ffc56ea6541fc03b4572c05cc97f6bab733. 
Apr 21 10:24:54.876418 containerd[1463]: time="2026-04-21T10:24:54.876388730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nhrf,Uid:cca9f627-ea4c-4a18-9688-eed6e7d150e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b3b765625c97da13580bde42843ffc56ea6541fc03b4572c05cc97f6bab733\"" Apr 21 10:24:54.877204 containerd[1463]: time="2026-04-21T10:24:54.877047838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq8sk,Uid:9137750a-f212-45a1-bafe-fd167f2dec35,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\"" Apr 21 10:24:54.877897 kubelet[2513]: E0421 10:24:54.877227 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:54.878336 kubelet[2513]: E0421 10:24:54.878078 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:54.880312 containerd[1463]: time="2026-04-21T10:24:54.880044400Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 10:24:54.885142 containerd[1463]: time="2026-04-21T10:24:54.885065965Z" level=info msg="CreateContainer within sandbox \"a9b3b765625c97da13580bde42843ffc56ea6541fc03b4572c05cc97f6bab733\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:24:54.900623 containerd[1463]: time="2026-04-21T10:24:54.900578627Z" level=info msg="CreateContainer within sandbox \"a9b3b765625c97da13580bde42843ffc56ea6541fc03b4572c05cc97f6bab733\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07c78a528bf1b7c539250f8bb2f151c955eefc08cdb2a30c056119727cc0fbcc\"" Apr 21 10:24:54.901340 containerd[1463]: time="2026-04-21T10:24:54.901311768Z" 
level=info msg="StartContainer for \"07c78a528bf1b7c539250f8bb2f151c955eefc08cdb2a30c056119727cc0fbcc\"" Apr 21 10:24:54.911505 kubelet[2513]: E0421 10:24:54.911431 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:54.913182 containerd[1463]: time="2026-04-21T10:24:54.912321079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-nn9cs,Uid:51d6c6dc-c053-492e-9dda-0506ccfcf7ff,Namespace:kube-system,Attempt:0,}" Apr 21 10:24:54.934507 systemd[1]: Started cri-containerd-07c78a528bf1b7c539250f8bb2f151c955eefc08cdb2a30c056119727cc0fbcc.scope - libcontainer container 07c78a528bf1b7c539250f8bb2f151c955eefc08cdb2a30c056119727cc0fbcc. Apr 21 10:24:54.943143 containerd[1463]: time="2026-04-21T10:24:54.942792718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:24:54.943143 containerd[1463]: time="2026-04-21T10:24:54.942833114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:24:54.943143 containerd[1463]: time="2026-04-21T10:24:54.942844937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.943143 containerd[1463]: time="2026-04-21T10:24:54.942931963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:24:54.956514 containerd[1463]: time="2026-04-21T10:24:54.956476735Z" level=info msg="StartContainer for \"07c78a528bf1b7c539250f8bb2f151c955eefc08cdb2a30c056119727cc0fbcc\" returns successfully" Apr 21 10:24:54.960535 systemd[1]: Started cri-containerd-aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948.scope - libcontainer container aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948. Apr 21 10:24:54.996741 containerd[1463]: time="2026-04-21T10:24:54.996671758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-nn9cs,Uid:51d6c6dc-c053-492e-9dda-0506ccfcf7ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\"" Apr 21 10:24:54.997601 kubelet[2513]: E0421 10:24:54.997523 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:55.650880 kubelet[2513]: E0421 10:24:55.650807 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:59.232635 kubelet[2513]: E0421 10:24:59.232575 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:24:59.245092 kubelet[2513]: I0421 10:24:59.245022 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6nhrf" podStartSLOduration=5.245011047 podStartE2EDuration="5.245011047s" podCreationTimestamp="2026-04-21 10:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:24:55.668454842 +0000 UTC m=+9.129997717" 
watchObservedRunningTime="2026-04-21 10:24:59.245011047 +0000 UTC m=+12.706553931" Apr 21 10:25:00.136206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819245186.mount: Deactivated successfully. Apr 21 10:25:00.368081 kubelet[2513]: E0421 10:25:00.368011 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:02.597471 containerd[1463]: time="2026-04-21T10:25:02.597366614Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:02.598224 containerd[1463]: time="2026-04-21T10:25:02.598130150Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 21 10:25:02.599351 containerd[1463]: time="2026-04-21T10:25:02.599322439Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:02.600617 containerd[1463]: time="2026-04-21T10:25:02.600583165Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.72051136s" Apr 21 10:25:02.600678 containerd[1463]: time="2026-04-21T10:25:02.600626499Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 21 10:25:02.601839 containerd[1463]: time="2026-04-21T10:25:02.601671915Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 21 10:25:02.605353 containerd[1463]: time="2026-04-21T10:25:02.605322040Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:25:02.617888 containerd[1463]: time="2026-04-21T10:25:02.617818648Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\"" Apr 21 10:25:02.618647 containerd[1463]: time="2026-04-21T10:25:02.618517735Z" level=info msg="StartContainer for \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\"" Apr 21 10:25:02.640961 systemd[1]: run-containerd-runc-k8s.io-c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd-runc.sdFNj3.mount: Deactivated successfully. Apr 21 10:25:02.653540 systemd[1]: Started cri-containerd-c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd.scope - libcontainer container c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd. Apr 21 10:25:02.681804 containerd[1463]: time="2026-04-21T10:25:02.681757950Z" level=info msg="StartContainer for \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\" returns successfully" Apr 21 10:25:02.686759 systemd[1]: cri-containerd-c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd.scope: Deactivated successfully. 
Apr 21 10:25:02.734216 kubelet[2513]: E0421 10:25:02.734087 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:02.757160 containerd[1463]: time="2026-04-21T10:25:02.756927749Z" level=info msg="shim disconnected" id=c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd namespace=k8s.io Apr 21 10:25:02.757160 containerd[1463]: time="2026-04-21T10:25:02.756990155Z" level=warning msg="cleaning up after shim disconnected" id=c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd namespace=k8s.io Apr 21 10:25:02.757160 containerd[1463]: time="2026-04-21T10:25:02.756997390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:03.615627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd-rootfs.mount: Deactivated successfully. Apr 21 10:25:03.670469 kubelet[2513]: E0421 10:25:03.670415 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:03.677823 containerd[1463]: time="2026-04-21T10:25:03.677753125Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:25:03.692913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152521707.mount: Deactivated successfully. 
Apr 21 10:25:03.694813 containerd[1463]: time="2026-04-21T10:25:03.694744782Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\"" Apr 21 10:25:03.696097 containerd[1463]: time="2026-04-21T10:25:03.695936990Z" level=info msg="StartContainer for \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\"" Apr 21 10:25:03.734498 systemd[1]: Started cri-containerd-74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9.scope - libcontainer container 74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9. Apr 21 10:25:03.757524 containerd[1463]: time="2026-04-21T10:25:03.757440111Z" level=info msg="StartContainer for \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\" returns successfully" Apr 21 10:25:03.768357 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:25:03.768795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:25:03.768860 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:25:03.775034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:25:03.775435 systemd[1]: cri-containerd-74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9.scope: Deactivated successfully. Apr 21 10:25:03.802640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 21 10:25:03.804663 containerd[1463]: time="2026-04-21T10:25:03.804555868Z" level=info msg="shim disconnected" id=74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9 namespace=k8s.io Apr 21 10:25:03.804789 containerd[1463]: time="2026-04-21T10:25:03.804668601Z" level=warning msg="cleaning up after shim disconnected" id=74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9 namespace=k8s.io Apr 21 10:25:03.804789 containerd[1463]: time="2026-04-21T10:25:03.804678579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:03.819686 containerd[1463]: time="2026-04-21T10:25:03.819581725Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:25:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:25:04.614528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:04.673819 kubelet[2513]: E0421 10:25:04.673745 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:04.680322 containerd[1463]: time="2026-04-21T10:25:04.679806046Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:25:04.699358 containerd[1463]: time="2026-04-21T10:25:04.699246906Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\"" Apr 21 10:25:04.703377 containerd[1463]: time="2026-04-21T10:25:04.702102663Z" level=info msg="StartContainer for \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\"" Apr 21 10:25:04.730424 systemd[1]: Started cri-containerd-b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb.scope - libcontainer container b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb. Apr 21 10:25:04.755091 containerd[1463]: time="2026-04-21T10:25:04.754982958Z" level=info msg="StartContainer for \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\" returns successfully" Apr 21 10:25:04.757744 systemd[1]: cri-containerd-b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb.scope: Deactivated successfully. 
Apr 21 10:25:04.799830 containerd[1463]: time="2026-04-21T10:25:04.799653802Z" level=info msg="shim disconnected" id=b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb namespace=k8s.io Apr 21 10:25:04.799830 containerd[1463]: time="2026-04-21T10:25:04.799739466Z" level=warning msg="cleaning up after shim disconnected" id=b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb namespace=k8s.io Apr 21 10:25:04.799830 containerd[1463]: time="2026-04-21T10:25:04.799747045Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:04.984742 containerd[1463]: time="2026-04-21T10:25:04.984682023Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:04.985347 containerd[1463]: time="2026-04-21T10:25:04.985300356Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 21 10:25:04.986508 containerd[1463]: time="2026-04-21T10:25:04.986467272Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:25:04.987523 containerd[1463]: time="2026-04-21T10:25:04.987455058Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.385759376s" Apr 21 10:25:04.987557 containerd[1463]: time="2026-04-21T10:25:04.987521090Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 21 10:25:04.994851 containerd[1463]: time="2026-04-21T10:25:04.994802557Z" level=info msg="CreateContainer within sandbox \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 21 10:25:05.008675 containerd[1463]: time="2026-04-21T10:25:05.008599230Z" level=info msg="CreateContainer within sandbox \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\"" Apr 21 10:25:05.009850 containerd[1463]: time="2026-04-21T10:25:05.009809716Z" level=info msg="StartContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\"" Apr 21 10:25:05.033455 systemd[1]: Started cri-containerd-73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18.scope - libcontainer container 73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18. Apr 21 10:25:05.055051 containerd[1463]: time="2026-04-21T10:25:05.055000415Z" level=info msg="StartContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" returns successfully" Apr 21 10:25:05.615366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:05.680635 kubelet[2513]: E0421 10:25:05.680225 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:05.686344 kubelet[2513]: E0421 10:25:05.685343 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:05.692645 containerd[1463]: time="2026-04-21T10:25:05.692457293Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:25:05.708898 containerd[1463]: time="2026-04-21T10:25:05.708741763Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\"" Apr 21 10:25:05.715873 containerd[1463]: time="2026-04-21T10:25:05.712869491Z" level=info msg="StartContainer for \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\"" Apr 21 10:25:05.739220 kubelet[2513]: I0421 10:25:05.739066 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-nn9cs" podStartSLOduration=1.747692565 podStartE2EDuration="11.739018295s" podCreationTimestamp="2026-04-21 10:24:54 +0000 UTC" firstStartedPulling="2026-04-21 10:24:54.998197307 +0000 UTC m=+8.459740171" lastFinishedPulling="2026-04-21 10:25:04.989523036 +0000 UTC m=+18.451065901" observedRunningTime="2026-04-21 10:25:05.738897255 +0000 UTC m=+19.200440130" watchObservedRunningTime="2026-04-21 10:25:05.739018295 +0000 UTC m=+19.200561159" Apr 21 10:25:05.772612 systemd[1]: Started 
cri-containerd-46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe.scope - libcontainer container 46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe. Apr 21 10:25:05.797911 systemd[1]: cri-containerd-46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe.scope: Deactivated successfully. Apr 21 10:25:05.800243 containerd[1463]: time="2026-04-21T10:25:05.800205910Z" level=info msg="StartContainer for \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\" returns successfully" Apr 21 10:25:05.857655 containerd[1463]: time="2026-04-21T10:25:05.857212302Z" level=info msg="shim disconnected" id=46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe namespace=k8s.io Apr 21 10:25:05.857965 containerd[1463]: time="2026-04-21T10:25:05.857683442Z" level=warning msg="cleaning up after shim disconnected" id=46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe namespace=k8s.io Apr 21 10:25:05.857965 containerd[1463]: time="2026-04-21T10:25:05.857695603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:06.615148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:06.689519 kubelet[2513]: E0421 10:25:06.689463 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:06.689847 kubelet[2513]: E0421 10:25:06.689475 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:06.695351 containerd[1463]: time="2026-04-21T10:25:06.695241730Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:25:06.719561 containerd[1463]: time="2026-04-21T10:25:06.719499922Z" level=info msg="CreateContainer within sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\"" Apr 21 10:25:06.720309 containerd[1463]: time="2026-04-21T10:25:06.720250331Z" level=info msg="StartContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\"" Apr 21 10:25:06.748430 systemd[1]: Started cri-containerd-2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee.scope - libcontainer container 2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee. 
Apr 21 10:25:06.778217 containerd[1463]: time="2026-04-21T10:25:06.778148437Z" level=info msg="StartContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" returns successfully" Apr 21 10:25:06.922440 kubelet[2513]: I0421 10:25:06.922212 2513 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 10:25:06.963377 systemd[1]: Created slice kubepods-burstable-pod4b6c9157_5040_4ac8_892d_724aefeb2698.slice - libcontainer container kubepods-burstable-pod4b6c9157_5040_4ac8_892d_724aefeb2698.slice. Apr 21 10:25:06.969189 systemd[1]: Created slice kubepods-burstable-pod22cf80b5_c5b1_48a3_96fd_e7d3d87c2579.slice - libcontainer container kubepods-burstable-pod22cf80b5_c5b1_48a3_96fd_e7d3d87c2579.slice. Apr 21 10:25:07.033820 kubelet[2513]: I0421 10:25:07.033767 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22cf80b5-c5b1-48a3-96fd-e7d3d87c2579-config-volume\") pod \"coredns-7d764666f9-j2mcb\" (UID: \"22cf80b5-c5b1-48a3-96fd-e7d3d87c2579\") " pod="kube-system/coredns-7d764666f9-j2mcb" Apr 21 10:25:07.033820 kubelet[2513]: I0421 10:25:07.033815 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsctk\" (UniqueName: \"kubernetes.io/projected/4b6c9157-5040-4ac8-892d-724aefeb2698-kube-api-access-nsctk\") pod \"coredns-7d764666f9-ds65n\" (UID: \"4b6c9157-5040-4ac8-892d-724aefeb2698\") " pod="kube-system/coredns-7d764666f9-ds65n" Apr 21 10:25:07.033820 kubelet[2513]: I0421 10:25:07.033830 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6c9157-5040-4ac8-892d-724aefeb2698-config-volume\") pod \"coredns-7d764666f9-ds65n\" (UID: \"4b6c9157-5040-4ac8-892d-724aefeb2698\") " pod="kube-system/coredns-7d764666f9-ds65n" Apr 21 10:25:07.033820 
kubelet[2513]: I0421 10:25:07.033841 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rgtl\" (UniqueName: \"kubernetes.io/projected/22cf80b5-c5b1-48a3-96fd-e7d3d87c2579-kube-api-access-6rgtl\") pod \"coredns-7d764666f9-j2mcb\" (UID: \"22cf80b5-c5b1-48a3-96fd-e7d3d87c2579\") " pod="kube-system/coredns-7d764666f9-j2mcb" Apr 21 10:25:07.280464 kubelet[2513]: E0421 10:25:07.280406 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:07.283916 kubelet[2513]: E0421 10:25:07.283874 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:07.314197 containerd[1463]: time="2026-04-21T10:25:07.314098063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j2mcb,Uid:22cf80b5-c5b1-48a3-96fd-e7d3d87c2579,Namespace:kube-system,Attempt:0,}" Apr 21 10:25:07.314429 containerd[1463]: time="2026-04-21T10:25:07.314347423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ds65n,Uid:4b6c9157-5040-4ac8-892d-724aefeb2698,Namespace:kube-system,Attempt:0,}" Apr 21 10:25:07.696528 kubelet[2513]: E0421 10:25:07.696488 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:07.713620 kubelet[2513]: I0421 10:25:07.713486 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-qq8sk" podStartSLOduration=1.903057457 podStartE2EDuration="13.71347246s" podCreationTimestamp="2026-04-21 10:24:54 +0000 UTC" firstStartedPulling="2026-04-21 10:24:54.879783069 +0000 UTC m=+8.341325934" lastFinishedPulling="2026-04-21 10:25:06.690198063 +0000 
UTC m=+20.151740937" observedRunningTime="2026-04-21 10:25:07.712741293 +0000 UTC m=+21.174284169" watchObservedRunningTime="2026-04-21 10:25:07.71347246 +0000 UTC m=+21.175015335" Apr 21 10:25:08.523196 update_engine[1451]: I20260421 10:25:08.523056 1451 update_attempter.cc:509] Updating boot flags... Apr 21 10:25:08.542351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362) Apr 21 10:25:08.700878 kubelet[2513]: E0421 10:25:08.700816 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:08.733946 systemd-networkd[1389]: cilium_host: Link UP Apr 21 10:25:08.734032 systemd-networkd[1389]: cilium_net: Link UP Apr 21 10:25:08.734158 systemd-networkd[1389]: cilium_net: Gained carrier Apr 21 10:25:08.734251 systemd-networkd[1389]: cilium_host: Gained carrier Apr 21 10:25:08.735469 systemd-networkd[1389]: cilium_net: Gained IPv6LL Apr 21 10:25:08.822685 systemd-networkd[1389]: cilium_vxlan: Link UP Apr 21 10:25:08.822691 systemd-networkd[1389]: cilium_vxlan: Gained carrier Apr 21 10:25:09.020365 kernel: NET: Registered PF_ALG protocol family Apr 21 10:25:09.467460 systemd-networkd[1389]: cilium_host: Gained IPv6LL Apr 21 10:25:09.556659 systemd-networkd[1389]: lxc_health: Link UP Apr 21 10:25:09.564323 systemd-networkd[1389]: lxc_health: Gained carrier Apr 21 10:25:09.703965 kubelet[2513]: E0421 10:25:09.703899 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:09.896579 systemd-networkd[1389]: lxcbf4608922ffd: Link UP Apr 21 10:25:09.919426 kernel: eth0: renamed from tmpf47b6 Apr 21 10:25:09.931325 kernel: eth0: renamed from tmp203b8 Apr 21 10:25:09.942750 systemd-networkd[1389]: lxc0f9a16f74df5: Link UP Apr 21 10:25:09.943923 
systemd-networkd[1389]: lxc0f9a16f74df5: Gained carrier Apr 21 10:25:09.944104 systemd-networkd[1389]: lxcbf4608922ffd: Gained carrier Apr 21 10:25:10.490577 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL Apr 21 10:25:10.619929 systemd-networkd[1389]: lxc_health: Gained IPv6LL Apr 21 10:25:10.648632 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:45808.service - OpenSSH per-connection server daemon (10.0.0.1:45808). Apr 21 10:25:10.698407 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 45808 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:10.700470 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:10.706519 systemd-logind[1447]: New session 8 of user core. Apr 21 10:25:10.712655 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:25:10.810875 kubelet[2513]: E0421 10:25:10.810713 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:10.854992 sshd[3739]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:10.858414 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:45808.service: Deactivated successfully. Apr 21 10:25:10.860427 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:25:10.861144 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:25:10.862220 systemd-logind[1447]: Removed session 8. 
Apr 21 10:25:11.194495 systemd-networkd[1389]: lxc0f9a16f74df5: Gained IPv6LL Apr 21 10:25:11.713018 kubelet[2513]: E0421 10:25:11.712905 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:11.834536 systemd-networkd[1389]: lxcbf4608922ffd: Gained IPv6LL Apr 21 10:25:12.715895 kubelet[2513]: E0421 10:25:12.715840 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:13.360493 containerd[1463]: time="2026-04-21T10:25:13.359176026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:25:13.360493 containerd[1463]: time="2026-04-21T10:25:13.360057766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:25:13.360493 containerd[1463]: time="2026-04-21T10:25:13.360157647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:13.360794 containerd[1463]: time="2026-04-21T10:25:13.360768461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:13.361516 containerd[1463]: time="2026-04-21T10:25:13.361307109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:25:13.361516 containerd[1463]: time="2026-04-21T10:25:13.361373117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:25:13.361516 containerd[1463]: time="2026-04-21T10:25:13.361456655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:13.361745 containerd[1463]: time="2026-04-21T10:25:13.361527480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:13.391459 systemd[1]: Started cri-containerd-203b84634b3314720aeac05b1cd41db80a22cd2feb2523e6f06dc919c0d58700.scope - libcontainer container 203b84634b3314720aeac05b1cd41db80a22cd2feb2523e6f06dc919c0d58700. Apr 21 10:25:13.392961 systemd[1]: Started cri-containerd-f47b6dcd2d4f039dff9a9ae6e192df32ae7a3ddf86fd86e2de059854bc441fcd.scope - libcontainer container f47b6dcd2d4f039dff9a9ae6e192df32ae7a3ddf86fd86e2de059854bc441fcd. Apr 21 10:25:13.404803 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:25:13.407565 systemd-resolved[1393]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:25:13.437637 containerd[1463]: time="2026-04-21T10:25:13.437552350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ds65n,Uid:4b6c9157-5040-4ac8-892d-724aefeb2698,Namespace:kube-system,Attempt:0,} returns sandbox id \"203b84634b3314720aeac05b1cd41db80a22cd2feb2523e6f06dc919c0d58700\"" Apr 21 10:25:13.439315 kubelet[2513]: E0421 10:25:13.439072 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:13.444631 containerd[1463]: time="2026-04-21T10:25:13.444583059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-j2mcb,Uid:22cf80b5-c5b1-48a3-96fd-e7d3d87c2579,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"f47b6dcd2d4f039dff9a9ae6e192df32ae7a3ddf86fd86e2de059854bc441fcd\"" Apr 21 10:25:13.445524 kubelet[2513]: E0421 10:25:13.445481 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:13.448090 containerd[1463]: time="2026-04-21T10:25:13.448012451Z" level=info msg="CreateContainer within sandbox \"203b84634b3314720aeac05b1cd41db80a22cd2feb2523e6f06dc919c0d58700\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:25:13.455869 containerd[1463]: time="2026-04-21T10:25:13.455809053Z" level=info msg="CreateContainer within sandbox \"f47b6dcd2d4f039dff9a9ae6e192df32ae7a3ddf86fd86e2de059854bc441fcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:25:13.476785 containerd[1463]: time="2026-04-21T10:25:13.476697683Z" level=info msg="CreateContainer within sandbox \"f47b6dcd2d4f039dff9a9ae6e192df32ae7a3ddf86fd86e2de059854bc441fcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6ad2d359ff1e23e6d7750a5fee1c4a03ca0f14aab172a88947891b7a246a6a3\"" Apr 21 10:25:13.478385 containerd[1463]: time="2026-04-21T10:25:13.478331110Z" level=info msg="StartContainer for \"f6ad2d359ff1e23e6d7750a5fee1c4a03ca0f14aab172a88947891b7a246a6a3\"" Apr 21 10:25:13.493409 containerd[1463]: time="2026-04-21T10:25:13.493295319Z" level=info msg="CreateContainer within sandbox \"203b84634b3314720aeac05b1cd41db80a22cd2feb2523e6f06dc919c0d58700\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a409ac2eccfa6775a47f0f14827b7f09c8c2b6d32380d08c0b92a0323d340774\"" Apr 21 10:25:13.494594 containerd[1463]: time="2026-04-21T10:25:13.494573520Z" level=info msg="StartContainer for \"a409ac2eccfa6775a47f0f14827b7f09c8c2b6d32380d08c0b92a0323d340774\"" Apr 21 10:25:13.507546 systemd[1]: Started cri-containerd-f6ad2d359ff1e23e6d7750a5fee1c4a03ca0f14aab172a88947891b7a246a6a3.scope 
- libcontainer container f6ad2d359ff1e23e6d7750a5fee1c4a03ca0f14aab172a88947891b7a246a6a3. Apr 21 10:25:13.526483 systemd[1]: Started cri-containerd-a409ac2eccfa6775a47f0f14827b7f09c8c2b6d32380d08c0b92a0323d340774.scope - libcontainer container a409ac2eccfa6775a47f0f14827b7f09c8c2b6d32380d08c0b92a0323d340774. Apr 21 10:25:13.538247 containerd[1463]: time="2026-04-21T10:25:13.538204837Z" level=info msg="StartContainer for \"f6ad2d359ff1e23e6d7750a5fee1c4a03ca0f14aab172a88947891b7a246a6a3\" returns successfully" Apr 21 10:25:13.555651 containerd[1463]: time="2026-04-21T10:25:13.555467843Z" level=info msg="StartContainer for \"a409ac2eccfa6775a47f0f14827b7f09c8c2b6d32380d08c0b92a0323d340774\" returns successfully" Apr 21 10:25:13.725427 kubelet[2513]: E0421 10:25:13.725170 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:13.729540 kubelet[2513]: E0421 10:25:13.729467 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:13.736742 kubelet[2513]: I0421 10:25:13.736458 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-j2mcb" podStartSLOduration=19.73644934 podStartE2EDuration="19.73644934s" podCreationTimestamp="2026-04-21 10:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:13.736210094 +0000 UTC m=+27.197752970" watchObservedRunningTime="2026-04-21 10:25:13.73644934 +0000 UTC m=+27.197992215" Apr 21 10:25:13.744437 kubelet[2513]: I0421 10:25:13.744362 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ds65n" podStartSLOduration=19.744353235 podStartE2EDuration="19.744353235s" 
podCreationTimestamp="2026-04-21 10:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:13.742866323 +0000 UTC m=+27.204409198" watchObservedRunningTime="2026-04-21 10:25:13.744353235 +0000 UTC m=+27.205896110" Apr 21 10:25:14.733018 kubelet[2513]: E0421 10:25:14.732955 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:14.733018 kubelet[2513]: E0421 10:25:14.732982 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:15.736998 kubelet[2513]: E0421 10:25:15.736916 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:15.736998 kubelet[2513]: E0421 10:25:15.736928 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:15.869580 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:46460.service - OpenSSH per-connection server daemon (10.0.0.1:46460). Apr 21 10:25:15.928990 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 46460 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:15.930715 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:15.935937 systemd-logind[1447]: New session 9 of user core. Apr 21 10:25:15.945643 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:25:16.072002 sshd[3938]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:16.075636 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:46460.service: Deactivated successfully. Apr 21 10:25:16.077191 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:25:16.078372 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:25:16.079288 systemd-logind[1447]: Removed session 9. Apr 21 10:25:21.089713 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:46466.service - OpenSSH per-connection server daemon (10.0.0.1:46466). Apr 21 10:25:21.126070 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 46466 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:21.128179 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:21.134498 systemd-logind[1447]: New session 10 of user core. Apr 21 10:25:21.154486 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:25:21.262770 sshd[3956]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:21.267741 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:46466.service: Deactivated successfully. Apr 21 10:25:21.269388 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:25:21.269929 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:25:21.271034 systemd-logind[1447]: Removed session 10. Apr 21 10:25:26.280097 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816). Apr 21 10:25:26.322734 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:26.324155 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:26.328709 systemd-logind[1447]: New session 11 of user core. Apr 21 10:25:26.339564 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 10:25:26.456211 sshd[3974]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:26.467800 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:50816.service: Deactivated successfully. Apr 21 10:25:26.470232 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:25:26.471434 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:25:26.485526 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:50818.service - OpenSSH per-connection server daemon (10.0.0.1:50818). Apr 21 10:25:26.486761 systemd-logind[1447]: Removed session 11. Apr 21 10:25:26.515217 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 50818 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:26.516678 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:26.521347 systemd-logind[1447]: New session 12 of user core. Apr 21 10:25:26.535459 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:25:26.684125 sshd[3989]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:26.691399 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:50818.service: Deactivated successfully. Apr 21 10:25:26.694835 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:25:26.697237 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:25:26.704436 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:50834.service - OpenSSH per-connection server daemon (10.0.0.1:50834). Apr 21 10:25:26.710106 systemd-logind[1447]: Removed session 12. Apr 21 10:25:26.780209 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 50834 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:26.781341 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:26.785450 systemd-logind[1447]: New session 13 of user core. 
Apr 21 10:25:26.793429 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:25:26.913106 sshd[4002]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:26.916147 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:50834.service: Deactivated successfully. Apr 21 10:25:26.917562 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:25:26.918116 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:25:26.919097 systemd-logind[1447]: Removed session 13. Apr 21 10:25:31.925905 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). Apr 21 10:25:31.956637 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:31.958051 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:31.961881 systemd-logind[1447]: New session 14 of user core. Apr 21 10:25:31.971007 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:25:32.079234 sshd[4017]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:32.082222 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:50846.service: Deactivated successfully. Apr 21 10:25:32.083656 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:25:32.084774 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:25:32.085936 systemd-logind[1447]: Removed session 14. Apr 21 10:25:37.101768 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:49980.service - OpenSSH per-connection server daemon (10.0.0.1:49980). 
Apr 21 10:25:37.138771 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 49980 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:37.139985 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:37.144161 systemd-logind[1447]: New session 15 of user core. Apr 21 10:25:37.151750 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:25:37.257577 sshd[4032]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:37.271366 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:49980.service: Deactivated successfully. Apr 21 10:25:37.272742 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:25:37.274019 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:25:37.283787 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:49994.service - OpenSSH per-connection server daemon (10.0.0.1:49994). Apr 21 10:25:37.284795 systemd-logind[1447]: Removed session 15. Apr 21 10:25:37.312826 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 49994 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:37.314071 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:37.318332 systemd-logind[1447]: New session 16 of user core. Apr 21 10:25:37.327434 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:25:37.514942 sshd[4047]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:37.526515 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:49994.service: Deactivated successfully. Apr 21 10:25:37.527769 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:25:37.528822 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:25:37.533702 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:50008.service - OpenSSH per-connection server daemon (10.0.0.1:50008). 
Apr 21 10:25:37.534337 systemd-logind[1447]: Removed session 16. Apr 21 10:25:37.562982 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 50008 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:37.564399 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:37.568475 systemd-logind[1447]: New session 17 of user core. Apr 21 10:25:37.579440 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:25:38.043808 sshd[4060]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:38.051438 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:50008.service: Deactivated successfully. Apr 21 10:25:38.053560 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:25:38.055126 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:25:38.065736 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:50010.service - OpenSSH per-connection server daemon (10.0.0.1:50010). Apr 21 10:25:38.067671 systemd-logind[1447]: Removed session 17. Apr 21 10:25:38.102123 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 50010 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:38.103471 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:38.107435 systemd-logind[1447]: New session 18 of user core. Apr 21 10:25:38.122107 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:25:38.330079 sshd[4078]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:38.341307 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:50010.service: Deactivated successfully. Apr 21 10:25:38.342664 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:25:38.344403 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. 
Apr 21 10:25:38.359557 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:50012.service - OpenSSH per-connection server daemon (10.0.0.1:50012). Apr 21 10:25:38.360804 systemd-logind[1447]: Removed session 18. Apr 21 10:25:38.389330 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 50012 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:38.390837 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:38.400520 systemd-logind[1447]: New session 19 of user core. Apr 21 10:25:38.412973 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:25:38.523425 sshd[4090]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:38.526349 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:50012.service: Deactivated successfully. Apr 21 10:25:38.527731 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:25:38.528329 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:25:38.529166 systemd-logind[1447]: Removed session 19. Apr 21 10:25:43.535801 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:50022.service - OpenSSH per-connection server daemon (10.0.0.1:50022). Apr 21 10:25:43.568737 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 50022 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:43.569996 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:43.573998 systemd-logind[1447]: New session 20 of user core. Apr 21 10:25:43.580451 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:25:43.682844 sshd[4108]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:43.686074 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:50022.service: Deactivated successfully. Apr 21 10:25:43.688090 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 10:25:43.689209 systemd-logind[1447]: Session 20 logged out. 
Waiting for processes to exit. Apr 21 10:25:43.690396 systemd-logind[1447]: Removed session 20. Apr 21 10:25:48.693704 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:57696.service - OpenSSH per-connection server daemon (10.0.0.1:57696). Apr 21 10:25:48.725577 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 57696 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:48.726894 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:48.732071 systemd-logind[1447]: New session 21 of user core. Apr 21 10:25:48.744009 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 10:25:48.852234 sshd[4125]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:48.860983 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:57696.service: Deactivated successfully. Apr 21 10:25:48.863184 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:25:48.864532 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:25:48.880495 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:57704.service - OpenSSH per-connection server daemon (10.0.0.1:57704). Apr 21 10:25:48.881237 systemd-logind[1447]: Removed session 21. Apr 21 10:25:48.910123 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 57704 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:48.911483 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:48.915658 systemd-logind[1447]: New session 22 of user core. Apr 21 10:25:48.926112 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 21 10:25:50.400533 containerd[1463]: time="2026-04-21T10:25:50.400461621Z" level=info msg="StopContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" with timeout 30 (s)" Apr 21 10:25:50.400975 containerd[1463]: time="2026-04-21T10:25:50.400806908Z" level=info msg="Stop container \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" with signal terminated" Apr 21 10:25:50.410206 systemd[1]: run-containerd-runc-k8s.io-2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee-runc.HyoGAq.mount: Deactivated successfully. Apr 21 10:25:50.416884 systemd[1]: cri-containerd-73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18.scope: Deactivated successfully. Apr 21 10:25:50.427327 containerd[1463]: time="2026-04-21T10:25:50.425851868Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:25:50.430834 containerd[1463]: time="2026-04-21T10:25:50.430814493Z" level=info msg="StopContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" with timeout 2 (s)" Apr 21 10:25:50.431439 containerd[1463]: time="2026-04-21T10:25:50.431375787Z" level=info msg="Stop container \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" with signal terminated" Apr 21 10:25:50.434518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:50.440822 systemd-networkd[1389]: lxc_health: Link DOWN Apr 21 10:25:50.440828 systemd-networkd[1389]: lxc_health: Lost carrier Apr 21 10:25:50.445801 containerd[1463]: time="2026-04-21T10:25:50.445688695Z" level=info msg="shim disconnected" id=73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18 namespace=k8s.io Apr 21 10:25:50.445801 containerd[1463]: time="2026-04-21T10:25:50.445756463Z" level=warning msg="cleaning up after shim disconnected" id=73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18 namespace=k8s.io Apr 21 10:25:50.445801 containerd[1463]: time="2026-04-21T10:25:50.445764225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:50.462434 containerd[1463]: time="2026-04-21T10:25:50.462132946Z" level=info msg="StopContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" returns successfully" Apr 21 10:25:50.465567 containerd[1463]: time="2026-04-21T10:25:50.465449920Z" level=info msg="StopPodSandbox for \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\"" Apr 21 10:25:50.465929 containerd[1463]: time="2026-04-21T10:25:50.465903198Z" level=info msg="Container to stop \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.470898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948-shm.mount: Deactivated successfully. Apr 21 10:25:50.472682 systemd[1]: cri-containerd-2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee.scope: Deactivated successfully. Apr 21 10:25:50.473615 systemd[1]: cri-containerd-2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee.scope: Consumed 6.389s CPU time. Apr 21 10:25:50.482673 systemd[1]: cri-containerd-aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948.scope: Deactivated successfully. 
Apr 21 10:25:50.496412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee-rootfs.mount: Deactivated successfully. Apr 21 10:25:50.502883 containerd[1463]: time="2026-04-21T10:25:50.502817790Z" level=info msg="shim disconnected" id=2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee namespace=k8s.io Apr 21 10:25:50.502883 containerd[1463]: time="2026-04-21T10:25:50.502873803Z" level=warning msg="cleaning up after shim disconnected" id=2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee namespace=k8s.io Apr 21 10:25:50.502883 containerd[1463]: time="2026-04-21T10:25:50.502881422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:50.503756 containerd[1463]: time="2026-04-21T10:25:50.503413402Z" level=info msg="shim disconnected" id=aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948 namespace=k8s.io Apr 21 10:25:50.503756 containerd[1463]: time="2026-04-21T10:25:50.503482792Z" level=warning msg="cleaning up after shim disconnected" id=aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948 namespace=k8s.io Apr 21 10:25:50.503756 containerd[1463]: time="2026-04-21T10:25:50.503491732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:50.518419 containerd[1463]: time="2026-04-21T10:25:50.518362021Z" level=info msg="TearDown network for sandbox \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\" successfully" Apr 21 10:25:50.518582 containerd[1463]: time="2026-04-21T10:25:50.518512406Z" level=info msg="StopPodSandbox for \"aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948\" returns successfully" Apr 21 10:25:50.521680 containerd[1463]: time="2026-04-21T10:25:50.521618077Z" level=info msg="StopContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" returns successfully" Apr 21 10:25:50.522043 containerd[1463]: time="2026-04-21T10:25:50.522017329Z" 
level=info msg="StopPodSandbox for \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\"" Apr 21 10:25:50.522090 containerd[1463]: time="2026-04-21T10:25:50.522075773Z" level=info msg="Container to stop \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.522090 containerd[1463]: time="2026-04-21T10:25:50.522085624Z" level=info msg="Container to stop \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.522124 containerd[1463]: time="2026-04-21T10:25:50.522093033Z" level=info msg="Container to stop \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.522124 containerd[1463]: time="2026-04-21T10:25:50.522099783Z" level=info msg="Container to stop \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.522124 containerd[1463]: time="2026-04-21T10:25:50.522106659Z" level=info msg="Container to stop \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:25:50.528006 systemd[1]: cri-containerd-d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba.scope: Deactivated successfully. 
Apr 21 10:25:50.558310 containerd[1463]: time="2026-04-21T10:25:50.557463213Z" level=info msg="shim disconnected" id=d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba namespace=k8s.io Apr 21 10:25:50.558310 containerd[1463]: time="2026-04-21T10:25:50.557792309Z" level=warning msg="cleaning up after shim disconnected" id=d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba namespace=k8s.io Apr 21 10:25:50.558310 containerd[1463]: time="2026-04-21T10:25:50.557870860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:50.581324 containerd[1463]: time="2026-04-21T10:25:50.581226736Z" level=info msg="TearDown network for sandbox \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" successfully" Apr 21 10:25:50.581439 containerd[1463]: time="2026-04-21T10:25:50.581347522Z" level=info msg="StopPodSandbox for \"d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba\" returns successfully" Apr 21 10:25:50.606801 kubelet[2513]: I0421 10:25:50.606730 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-cilium-config-path\") pod \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\" (UID: \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\") " Apr 21 10:25:50.606801 kubelet[2513]: I0421 10:25:50.606820 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-kube-api-access-htkx6\" (UniqueName: \"kubernetes.io/projected/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-kube-api-access-htkx6\") pod \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\" (UID: \"51d6c6dc-c053-492e-9dda-0506ccfcf7ff\") " Apr 21 10:25:50.610487 kubelet[2513]: I0421 10:25:50.610345 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-cilium-config-path" pod "51d6c6dc-c053-492e-9dda-0506ccfcf7ff" (UID: "51d6c6dc-c053-492e-9dda-0506ccfcf7ff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:25:50.611023 kubelet[2513]: I0421 10:25:50.610960 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-kube-api-access-htkx6" pod "51d6c6dc-c053-492e-9dda-0506ccfcf7ff" (UID: "51d6c6dc-c053-492e-9dda-0506ccfcf7ff"). InnerVolumeSpecName "kube-api-access-htkx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:25:50.628585 systemd[1]: Removed slice kubepods-besteffort-pod51d6c6dc_c053_492e_9dda_0506ccfcf7ff.slice - libcontainer container kubepods-besteffort-pod51d6c6dc_c053_492e_9dda_0506ccfcf7ff.slice. Apr 21 10:25:50.708154 kubelet[2513]: I0421 10:25:50.708066 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-run\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-run\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708154 kubelet[2513]: I0421 10:25:50.708158 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-etc-cni-netd\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708366 kubelet[2513]: I0421 10:25:50.708174 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-lib-modules\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708366 kubelet[2513]: I0421 10:25:50.708223 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-config-path\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708366 kubelet[2513]: I0421 10:25:50.708241 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-net\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708366 kubelet[2513]: I0421 10:25:50.708328 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-kube-api-access-8qzkv\" (UniqueName: \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-kube-api-access-8qzkv\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708366 kubelet[2513]: I0421 10:25:50.708346 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-xtables-lock\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708546 kubelet[2513]: I0421 10:25:50.708359 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-hostproc\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-hostproc\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708546 kubelet[2513]: I0421 10:25:50.708373 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-bpf-maps\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708546 kubelet[2513]: I0421 10:25:50.708379 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-run" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708546 kubelet[2513]: I0421 10:25:50.708401 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-bpf-maps" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708546 kubelet[2513]: I0421 10:25:50.708432 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-net" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708631 kubelet[2513]: I0421 10:25:50.708433 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-xtables-lock" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708631 kubelet[2513]: I0421 10:25:50.708442 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-hostproc" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708631 kubelet[2513]: I0421 10:25:50.708453 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-lib-modules" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708631 kubelet[2513]: I0421 10:25:50.708492 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-etc-cni-netd" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.708631 kubelet[2513]: I0421 10:25:50.708596 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-kernel\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708712 kubelet[2513]: I0421 10:25:50.708623 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9137750a-f212-45a1-bafe-fd167f2dec35-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9137750a-f212-45a1-bafe-fd167f2dec35-clustermesh-secrets\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708712 kubelet[2513]: I0421 10:25:50.708640 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cni-path\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cni-path\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708712 kubelet[2513]: I0421 10:25:50.708658 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-cgroup\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708712 kubelet[2513]: I0421 10:25:50.708674 2513 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-hubble-tls\") pod \"9137750a-f212-45a1-bafe-fd167f2dec35\" (UID: \"9137750a-f212-45a1-bafe-fd167f2dec35\") " Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708739 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708749 2513 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708756 2513 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708762 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708769 2513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708776 kubelet[2513]: I0421 10:25:50.708776 2513 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708870 kubelet[2513]: I0421 10:25:50.708784 2513 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708870 kubelet[2513]: I0421 10:25:50.708790 2513 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.708870 kubelet[2513]: I0421 10:25:50.708796 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-htkx6\" (UniqueName: \"kubernetes.io/projected/51d6c6dc-c053-492e-9dda-0506ccfcf7ff-kube-api-access-htkx6\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.709073 kubelet[2513]: I0421 10:25:50.709039 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cni-path" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.710120 kubelet[2513]: I0421 10:25:50.710055 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-cgroup" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.710120 kubelet[2513]: I0421 10:25:50.710080 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-kernel" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:25:50.710206 kubelet[2513]: I0421 10:25:50.710150 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-config-path" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:25:50.711749 kubelet[2513]: I0421 10:25:50.711691 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9137750a-f212-45a1-bafe-fd167f2dec35-clustermesh-secrets" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:25:50.711749 kubelet[2513]: I0421 10:25:50.711699 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-hubble-tls" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:25:50.711883 kubelet[2513]: I0421 10:25:50.711847 2513 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-kube-api-access-8qzkv" pod "9137750a-f212-45a1-bafe-fd167f2dec35" (UID: "9137750a-f212-45a1-bafe-fd167f2dec35"). InnerVolumeSpecName "kube-api-access-8qzkv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809714 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809785 2513 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809793 2513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9137750a-f212-45a1-bafe-fd167f2dec35-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809800 2513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8qzkv\" (UniqueName: \"kubernetes.io/projected/9137750a-f212-45a1-bafe-fd167f2dec35-kube-api-access-8qzkv\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809807 2513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809814 2513 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9137750a-f212-45a1-bafe-fd167f2dec35-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.809826 kubelet[2513]: I0421 10:25:50.809819 2513 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9137750a-f212-45a1-bafe-fd167f2dec35-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:25:50.865768 
kubelet[2513]: I0421 10:25:50.865710 2513 scope.go:122] "RemoveContainer" containerID="73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18" Apr 21 10:25:50.867157 containerd[1463]: time="2026-04-21T10:25:50.867111105Z" level=info msg="RemoveContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\"" Apr 21 10:25:50.874762 systemd[1]: Removed slice kubepods-burstable-pod9137750a_f212_45a1_bafe_fd167f2dec35.slice - libcontainer container kubepods-burstable-pod9137750a_f212_45a1_bafe_fd167f2dec35.slice. Apr 21 10:25:50.875547 systemd[1]: kubepods-burstable-pod9137750a_f212_45a1_bafe_fd167f2dec35.slice: Consumed 6.468s CPU time. Apr 21 10:25:50.876318 containerd[1463]: time="2026-04-21T10:25:50.876200486Z" level=info msg="RemoveContainer for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" returns successfully" Apr 21 10:25:50.876691 kubelet[2513]: I0421 10:25:50.876614 2513 scope.go:122] "RemoveContainer" containerID="73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18" Apr 21 10:25:50.881425 containerd[1463]: time="2026-04-21T10:25:50.881250461Z" level=error msg="ContainerStatus for \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\": not found" Apr 21 10:25:50.887879 kubelet[2513]: E0421 10:25:50.887828 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\": not found" containerID="73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18" Apr 21 10:25:50.887948 kubelet[2513]: I0421 10:25:50.887890 2513 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18"} err="failed to get container status \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\": rpc error: code = NotFound desc = an error occurred when try to find container \"73832caaa16d5808619812690852c1ee50cd42120fcaee22070688a6a6385c18\": not found" Apr 21 10:25:50.887948 kubelet[2513]: I0421 10:25:50.887933 2513 scope.go:122] "RemoveContainer" containerID="2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee" Apr 21 10:25:50.889042 containerd[1463]: time="2026-04-21T10:25:50.889008987Z" level=info msg="RemoveContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\"" Apr 21 10:25:50.892998 containerd[1463]: time="2026-04-21T10:25:50.892929824Z" level=info msg="RemoveContainer for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" returns successfully" Apr 21 10:25:50.893515 kubelet[2513]: I0421 10:25:50.893488 2513 scope.go:122] "RemoveContainer" containerID="46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe" Apr 21 10:25:50.895624 containerd[1463]: time="2026-04-21T10:25:50.895515850Z" level=info msg="RemoveContainer for \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\"" Apr 21 10:25:50.900081 containerd[1463]: time="2026-04-21T10:25:50.900006074Z" level=info msg="RemoveContainer for \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\" returns successfully" Apr 21 10:25:50.900534 kubelet[2513]: I0421 10:25:50.900496 2513 scope.go:122] "RemoveContainer" containerID="b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb" Apr 21 10:25:50.901809 containerd[1463]: time="2026-04-21T10:25:50.901779802Z" level=info msg="RemoveContainer for \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\"" Apr 21 10:25:50.905971 containerd[1463]: time="2026-04-21T10:25:50.905911348Z" level=info msg="RemoveContainer for 
\"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\" returns successfully" Apr 21 10:25:50.906747 kubelet[2513]: I0421 10:25:50.906697 2513 scope.go:122] "RemoveContainer" containerID="74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9" Apr 21 10:25:50.907914 containerd[1463]: time="2026-04-21T10:25:50.907884124Z" level=info msg="RemoveContainer for \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\"" Apr 21 10:25:50.910947 containerd[1463]: time="2026-04-21T10:25:50.910876973Z" level=info msg="RemoveContainer for \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\" returns successfully" Apr 21 10:25:50.911090 kubelet[2513]: I0421 10:25:50.911065 2513 scope.go:122] "RemoveContainer" containerID="c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd" Apr 21 10:25:50.912163 containerd[1463]: time="2026-04-21T10:25:50.912134297Z" level=info msg="RemoveContainer for \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\"" Apr 21 10:25:50.916453 containerd[1463]: time="2026-04-21T10:25:50.916362813Z" level=info msg="RemoveContainer for \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\" returns successfully" Apr 21 10:25:50.917986 kubelet[2513]: I0421 10:25:50.917244 2513 scope.go:122] "RemoveContainer" containerID="2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee" Apr 21 10:25:50.918516 containerd[1463]: time="2026-04-21T10:25:50.918451893Z" level=error msg="ContainerStatus for \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\": not found" Apr 21 10:25:50.919489 kubelet[2513]: E0421 10:25:50.919454 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\": not found" containerID="2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee" Apr 21 10:25:50.919904 kubelet[2513]: I0421 10:25:50.919827 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee"} err="failed to get container status \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"2247d6c264b850523f046d4e5c33edd0f2e5f07a3280d6cb18b132cb893ed1ee\": not found" Apr 21 10:25:50.919904 kubelet[2513]: I0421 10:25:50.919868 2513 scope.go:122] "RemoveContainer" containerID="46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe" Apr 21 10:25:50.920404 containerd[1463]: time="2026-04-21T10:25:50.920366271Z" level=error msg="ContainerStatus for \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\": not found" Apr 21 10:25:50.920614 kubelet[2513]: E0421 10:25:50.920520 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\": not found" containerID="46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe" Apr 21 10:25:50.920614 kubelet[2513]: I0421 10:25:50.920596 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe"} err="failed to get container status \"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"46afb7a1f9c9085bc4847934e5851f98532b41408b385478012db4ec795b97fe\": not found" Apr 21 10:25:50.920614 kubelet[2513]: I0421 10:25:50.920608 2513 scope.go:122] "RemoveContainer" containerID="b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb" Apr 21 10:25:50.920877 containerd[1463]: time="2026-04-21T10:25:50.920728867Z" level=error msg="ContainerStatus for \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\": not found" Apr 21 10:25:50.920922 kubelet[2513]: E0421 10:25:50.920880 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\": not found" containerID="b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb" Apr 21 10:25:50.920922 kubelet[2513]: I0421 10:25:50.920914 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb"} err="failed to get container status \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7cf31c8451999ced8b5ae66ddf0c1e379780290bf5998dc262a5ed3ac6011fb\": not found" Apr 21 10:25:50.920990 kubelet[2513]: I0421 10:25:50.920924 2513 scope.go:122] "RemoveContainer" containerID="74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9" Apr 21 10:25:50.921851 containerd[1463]: time="2026-04-21T10:25:50.921217020Z" level=error msg="ContainerStatus for \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\": not found" Apr 21 10:25:50.921851 containerd[1463]: time="2026-04-21T10:25:50.921816442Z" level=error msg="ContainerStatus for \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\": not found" Apr 21 10:25:50.921915 kubelet[2513]: E0421 10:25:50.921597 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\": not found" containerID="74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9" Apr 21 10:25:50.921915 kubelet[2513]: I0421 10:25:50.921616 2513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9"} err="failed to get container status \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"74f5677409bf05843ffab999fefa946fa1d0d9d00c07f56cbd584723c0731fd9\": not found" Apr 21 10:25:50.921915 kubelet[2513]: I0421 10:25:50.921627 2513 scope.go:122] "RemoveContainer" containerID="c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd" Apr 21 10:25:50.922001 kubelet[2513]: E0421 10:25:50.921933 2513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\": not found" containerID="c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd" Apr 21 10:25:50.922111 kubelet[2513]: I0421 10:25:50.921999 2513 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd"} err="failed to get container status \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c473ca917f0ba73a5cd5f4a1b5d7ac575f5e5700dbb95f9ba0eb030d04f541cd\": not found" Apr 21 10:25:51.407409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aad8cc8bd7260428591245d2714092a10b6dcfdfdd31b2f703a19d35faa33948-rootfs.mount: Deactivated successfully. Apr 21 10:25:51.407511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba-rootfs.mount: Deactivated successfully. Apr 21 10:25:51.407575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2ce650c75c764873fd46cfb06a2e4fa9aab20e6a98a01324bb6d618c43adeba-shm.mount: Deactivated successfully. Apr 21 10:25:51.407620 systemd[1]: var-lib-kubelet-pods-51d6c6dc\x2dc053\x2d492e\x2d9dda\x2d0506ccfcf7ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtkx6.mount: Deactivated successfully. Apr 21 10:25:51.407708 systemd[1]: var-lib-kubelet-pods-9137750a\x2df212\x2d45a1\x2dbafe\x2dfd167f2dec35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qzkv.mount: Deactivated successfully. Apr 21 10:25:51.407776 systemd[1]: var-lib-kubelet-pods-9137750a\x2df212\x2d45a1\x2dbafe\x2dfd167f2dec35-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 10:25:51.407816 systemd[1]: var-lib-kubelet-pods-9137750a\x2df212\x2d45a1\x2dbafe\x2dfd167f2dec35-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 21 10:25:51.665078 kubelet[2513]: E0421 10:25:51.664917 2513 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:25:52.359154 sshd[4139]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:52.365195 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:57704.service: Deactivated successfully. Apr 21 10:25:52.366527 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 10:25:52.367664 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Apr 21 10:25:52.368760 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:57718.service - OpenSSH per-connection server daemon (10.0.0.1:57718). Apr 21 10:25:52.369431 systemd-logind[1447]: Removed session 22. Apr 21 10:25:52.402624 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 57718 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:52.403964 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:52.407530 systemd-logind[1447]: New session 23 of user core. Apr 21 10:25:52.414424 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 21 10:25:52.625602 kubelet[2513]: I0421 10:25:52.625496 2513 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="51d6c6dc-c053-492e-9dda-0506ccfcf7ff" path="/var/lib/kubelet/pods/51d6c6dc-c053-492e-9dda-0506ccfcf7ff/volumes" Apr 21 10:25:52.625878 kubelet[2513]: I0421 10:25:52.625804 2513 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9137750a-f212-45a1-bafe-fd167f2dec35" path="/var/lib/kubelet/pods/9137750a-f212-45a1-bafe-fd167f2dec35/volumes" Apr 21 10:25:53.159945 sshd[4301]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:53.166977 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:57718.service: Deactivated successfully. 
Apr 21 10:25:53.169042 systemd[1]: session-23.scope: Deactivated successfully. Apr 21 10:25:53.170918 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Apr 21 10:25:53.184304 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:57734.service - OpenSSH per-connection server daemon (10.0.0.1:57734). Apr 21 10:25:53.189659 systemd-logind[1447]: Removed session 23. Apr 21 10:25:53.197546 systemd[1]: Created slice kubepods-burstable-pod7a0726c8_f01d_4f2b_ac33_b19f9b029652.slice - libcontainer container kubepods-burstable-pod7a0726c8_f01d_4f2b_ac33_b19f9b029652.slice. Apr 21 10:25:53.227366 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 57734 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.229675 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-cni-path\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.229713 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a0726c8-f01d-4f2b-ac33-b19f9b029652-clustermesh-secrets\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.229736 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a0726c8-f01d-4f2b-ac33-b19f9b029652-cilium-ipsec-secrets\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.230145 2513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-bpf-maps\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.230162 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-cilium-cgroup\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232321 kubelet[2513]: I0421 10:25:53.230174 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a0726c8-f01d-4f2b-ac33-b19f9b029652-hubble-tls\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.230374 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230186 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-etc-cni-netd\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230197 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-cilium-run\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230209 2513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-hostproc\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230224 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-xtables-lock\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230237 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a0726c8-f01d-4f2b-ac33-b19f9b029652-cilium-config-path\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232873 kubelet[2513]: I0421 10:25:53.230248 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-host-proc-sys-net\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232970 kubelet[2513]: I0421 10:25:53.230340 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-lib-modules\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232970 kubelet[2513]: I0421 10:25:53.230355 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7a0726c8-f01d-4f2b-ac33-b19f9b029652-host-proc-sys-kernel\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.232970 kubelet[2513]: I0421 10:25:53.230367 2513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrmk\" (UniqueName: \"kubernetes.io/projected/7a0726c8-f01d-4f2b-ac33-b19f9b029652-kube-api-access-7zrmk\") pod \"cilium-478q6\" (UID: \"7a0726c8-f01d-4f2b-ac33-b19f9b029652\") " pod="kube-system/cilium-478q6" Apr 21 10:25:53.238085 systemd-logind[1447]: New session 24 of user core. Apr 21 10:25:53.244728 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 21 10:25:53.300209 sshd[4314]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:53.309502 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:57734.service: Deactivated successfully. Apr 21 10:25:53.310834 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 10:25:53.312192 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Apr 21 10:25:53.313529 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:57740.service - OpenSSH per-connection server daemon (10.0.0.1:57740). Apr 21 10:25:53.314488 systemd-logind[1447]: Removed session 24. Apr 21 10:25:53.346727 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 57740 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:25:53.349778 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:53.358174 systemd-logind[1447]: New session 25 of user core. Apr 21 10:25:53.368531 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 21 10:25:53.516449 kubelet[2513]: E0421 10:25:53.516384 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:53.517785 containerd[1463]: time="2026-04-21T10:25:53.517503397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-478q6,Uid:7a0726c8-f01d-4f2b-ac33-b19f9b029652,Namespace:kube-system,Attempt:0,}" Apr 21 10:25:53.557432 containerd[1463]: time="2026-04-21T10:25:53.556875325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:25:53.557432 containerd[1463]: time="2026-04-21T10:25:53.557035793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:25:53.557432 containerd[1463]: time="2026-04-21T10:25:53.557049030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:53.557718 containerd[1463]: time="2026-04-21T10:25:53.557519767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:25:53.580536 systemd[1]: Started cri-containerd-35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0.scope - libcontainer container 35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0. 
Apr 21 10:25:53.605516 containerd[1463]: time="2026-04-21T10:25:53.605475442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-478q6,Uid:7a0726c8-f01d-4f2b-ac33-b19f9b029652,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\"" Apr 21 10:25:53.607076 kubelet[2513]: E0421 10:25:53.607012 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:53.614601 containerd[1463]: time="2026-04-21T10:25:53.614505840Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:25:53.637138 containerd[1463]: time="2026-04-21T10:25:53.637050250Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3\"" Apr 21 10:25:53.637934 containerd[1463]: time="2026-04-21T10:25:53.637913981Z" level=info msg="StartContainer for \"9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3\"" Apr 21 10:25:53.670513 systemd[1]: Started cri-containerd-9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3.scope - libcontainer container 9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3. Apr 21 10:25:53.692578 containerd[1463]: time="2026-04-21T10:25:53.692493631Z" level=info msg="StartContainer for \"9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3\" returns successfully" Apr 21 10:25:53.702667 systemd[1]: cri-containerd-9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3.scope: Deactivated successfully. 
Apr 21 10:25:53.736025 containerd[1463]: time="2026-04-21T10:25:53.735878508Z" level=info msg="shim disconnected" id=9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3 namespace=k8s.io Apr 21 10:25:53.736025 containerd[1463]: time="2026-04-21T10:25:53.736001139Z" level=warning msg="cleaning up after shim disconnected" id=9e44ab52ae2b0787e6c68c409ebdad111bfd2ada846fe508d55ba9b7a41022b3 namespace=k8s.io Apr 21 10:25:53.736025 containerd[1463]: time="2026-04-21T10:25:53.736012869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:53.882387 kubelet[2513]: E0421 10:25:53.882148 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:53.890181 containerd[1463]: time="2026-04-21T10:25:53.889977462Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:25:53.902316 containerd[1463]: time="2026-04-21T10:25:53.902149382Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220\"" Apr 21 10:25:53.902859 containerd[1463]: time="2026-04-21T10:25:53.902696579Z" level=info msg="StartContainer for \"8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220\"" Apr 21 10:25:53.942969 systemd[1]: Started cri-containerd-8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220.scope - libcontainer container 8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220. 
Apr 21 10:25:53.983223 containerd[1463]: time="2026-04-21T10:25:53.983124105Z" level=info msg="StartContainer for \"8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220\" returns successfully" Apr 21 10:25:53.993916 systemd[1]: cri-containerd-8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220.scope: Deactivated successfully. Apr 21 10:25:54.035521 containerd[1463]: time="2026-04-21T10:25:54.035411281Z" level=info msg="shim disconnected" id=8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220 namespace=k8s.io Apr 21 10:25:54.035521 containerd[1463]: time="2026-04-21T10:25:54.035481449Z" level=warning msg="cleaning up after shim disconnected" id=8c93fb52fb53d98742bd9c9bf87ff51ab862977517f83ec4f57538e2f8f89220 namespace=k8s.io Apr 21 10:25:54.035521 containerd[1463]: time="2026-04-21T10:25:54.035488927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:54.888156 kubelet[2513]: E0421 10:25:54.888072 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:54.893206 containerd[1463]: time="2026-04-21T10:25:54.892996305Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:25:54.917029 containerd[1463]: time="2026-04-21T10:25:54.916975636Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19\"" Apr 21 10:25:54.918566 containerd[1463]: time="2026-04-21T10:25:54.917667163Z" level=info msg="StartContainer for \"884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19\"" Apr 21 10:25:54.954467 systemd[1]: Started 
cri-containerd-884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19.scope - libcontainer container 884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19. Apr 21 10:25:54.979058 containerd[1463]: time="2026-04-21T10:25:54.979030673Z" level=info msg="StartContainer for \"884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19\" returns successfully" Apr 21 10:25:54.979510 systemd[1]: cri-containerd-884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19.scope: Deactivated successfully. Apr 21 10:25:55.017746 containerd[1463]: time="2026-04-21T10:25:55.017560898Z" level=info msg="shim disconnected" id=884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19 namespace=k8s.io Apr 21 10:25:55.017746 containerd[1463]: time="2026-04-21T10:25:55.017728964Z" level=warning msg="cleaning up after shim disconnected" id=884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19 namespace=k8s.io Apr 21 10:25:55.017746 containerd[1463]: time="2026-04-21T10:25:55.017736735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:55.338321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-884825629d5e97ebc0d2bdbada6d7bc7a62a3d984892a1f63537286e468d2b19-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:55.894782 kubelet[2513]: E0421 10:25:55.894702 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:55.899342 containerd[1463]: time="2026-04-21T10:25:55.899094940Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:25:55.910785 containerd[1463]: time="2026-04-21T10:25:55.910671880Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98\"" Apr 21 10:25:55.911705 containerd[1463]: time="2026-04-21T10:25:55.911524598Z" level=info msg="StartContainer for \"f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98\"" Apr 21 10:25:55.939465 systemd[1]: Started cri-containerd-f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98.scope - libcontainer container f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98. Apr 21 10:25:55.957818 systemd[1]: cri-containerd-f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98.scope: Deactivated successfully. Apr 21 10:25:55.961739 containerd[1463]: time="2026-04-21T10:25:55.961704290Z" level=info msg="StartContainer for \"f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98\" returns successfully" Apr 21 10:25:55.977532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98-rootfs.mount: Deactivated successfully. 
Apr 21 10:25:55.981025 containerd[1463]: time="2026-04-21T10:25:55.980933361Z" level=info msg="shim disconnected" id=f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98 namespace=k8s.io Apr 21 10:25:55.981025 containerd[1463]: time="2026-04-21T10:25:55.980992008Z" level=warning msg="cleaning up after shim disconnected" id=f62595e3839ef939c86185b54dbeef0618c7d5790e1fb5d42b43148b541e0a98 namespace=k8s.io Apr 21 10:25:55.981025 containerd[1463]: time="2026-04-21T10:25:55.980999057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:25:56.666784 kubelet[2513]: E0421 10:25:56.666729 2513 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:25:56.901435 kubelet[2513]: E0421 10:25:56.901347 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:56.907085 containerd[1463]: time="2026-04-21T10:25:56.907001288Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:25:56.924429 containerd[1463]: time="2026-04-21T10:25:56.924203513Z" level=info msg="CreateContainer within sandbox \"35f87b912aa2016797a4d87276df84efb9bf239e4555d465661f15f55227e6b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285\"" Apr 21 10:25:56.925882 containerd[1463]: time="2026-04-21T10:25:56.925764730Z" level=info msg="StartContainer for \"29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285\"" Apr 21 10:25:56.957593 systemd[1]: Started cri-containerd-29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285.scope - libcontainer container 
29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285. Apr 21 10:25:56.984089 containerd[1463]: time="2026-04-21T10:25:56.983994691Z" level=info msg="StartContainer for \"29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285\" returns successfully" Apr 21 10:25:57.224328 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 21 10:25:57.908246 kubelet[2513]: E0421 10:25:57.908159 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:25:57.927349 kubelet[2513]: I0421 10:25:57.927178 2513 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-478q6" podStartSLOduration=4.927162721 podStartE2EDuration="4.927162721s" podCreationTimestamp="2026-04-21 10:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:25:57.927115014 +0000 UTC m=+71.388657880" watchObservedRunningTime="2026-04-21 10:25:57.927162721 +0000 UTC m=+71.388705586" Apr 21 10:25:58.305600 kubelet[2513]: I0421 10:25:58.305493 2513 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T10:25:58Z","lastTransitionTime":"2026-04-21T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 21 10:25:59.514966 kubelet[2513]: E0421 10:25:59.514888 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:26:00.180176 systemd-networkd[1389]: lxc_health: Link UP Apr 21 10:26:00.190804 systemd-networkd[1389]: lxc_health: Gained carrier Apr 21 10:26:01.516627 
kubelet[2513]: E0421 10:26:01.516249 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:26:01.627635 systemd-networkd[1389]: lxc_health: Gained IPv6LL Apr 21 10:26:01.760721 systemd[1]: run-containerd-runc-k8s.io-29518d59489381250c7a607db49ed3df3c0a36f8cea8861f945250c19a8cf285-runc.Nldykg.mount: Deactivated successfully. Apr 21 10:26:01.918346 kubelet[2513]: E0421 10:26:01.918153 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:26:02.920202 kubelet[2513]: E0421 10:26:02.920141 2513 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:26:05.997214 sshd[4322]: pam_unix(sshd:session): session closed for user core Apr 21 10:26:05.999879 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:57740.service: Deactivated successfully. Apr 21 10:26:06.001166 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 10:26:06.001809 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Apr 21 10:26:06.002964 systemd-logind[1447]: Removed session 25.