Apr 21 10:41:09.090466 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:41:09.090494 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:41:09.090509 kernel: BIOS-provided physical RAM map:
Apr 21 10:41:09.090517 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:41:09.090525 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 21 10:41:09.090533 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 21 10:41:09.090543 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 21 10:41:09.090552 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 21 10:41:09.090560 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 21 10:41:09.090568 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 21 10:41:09.090578 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 21 10:41:09.090586 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 21 10:41:09.090595 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 21 10:41:09.090603 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 21 10:41:09.090646 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 21 10:41:09.090656 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 21 10:41:09.090668 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 21 10:41:09.090677 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 21 10:41:09.090686 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 21 10:41:09.090695 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:41:09.090704 kernel: NX (Execute Disable) protection: active
Apr 21 10:41:09.090713 kernel: APIC: Static calls initialized
Apr 21 10:41:09.090722 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:41:09.090731 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 21 10:41:09.090741 kernel: SMBIOS 2.8 present.
Apr 21 10:41:09.090750 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 21 10:41:09.090759 kernel: Hypervisor detected: KVM
Apr 21 10:41:09.090770 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:41:09.090779 kernel: kvm-clock: using sched offset of 4919824048 cycles
Apr 21 10:41:09.090788 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:41:09.090798 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 10:41:09.090808 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:41:09.090817 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:41:09.090827 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 21 10:41:09.090836 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:41:09.090846 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:41:09.090857 kernel: Using GB pages for direct mapping
Apr 21 10:41:09.090867 kernel: Secure boot disabled
Apr 21 10:41:09.090876 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:41:09.090886 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 21 10:41:09.090899 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:41:09.090910 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090920 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090931 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 21 10:41:09.090942 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090952 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090978 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090988 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:41:09.090998 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:41:09.091008 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 21 10:41:09.091020 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 21 10:41:09.091030 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 21 10:41:09.091040 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 21 10:41:09.091050 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 21 10:41:09.091060 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 21 10:41:09.091069 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 21 10:41:09.091079 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 21 10:41:09.091089 kernel: No NUMA configuration found
Apr 21 10:41:09.091099 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 21 10:41:09.091111 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 21 10:41:09.091121 kernel: Zone ranges:
Apr 21 10:41:09.091131 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:41:09.091141 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 21 10:41:09.091151 kernel: Normal empty
Apr 21 10:41:09.091161 kernel: Movable zone start for each node
Apr 21 10:41:09.091171 kernel: Early memory node ranges
Apr 21 10:41:09.091181 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:41:09.091191 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 21 10:41:09.091201 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 21 10:41:09.091213 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 21 10:41:09.091223 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 21 10:41:09.091233 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 21 10:41:09.091243 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 21 10:41:09.091253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:41:09.091263 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:41:09.091273 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 21 10:41:09.091283 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:41:09.091293 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 21 10:41:09.091305 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:41:09.091315 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 21 10:41:09.091325 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:41:09.091335 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:41:09.091345 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:41:09.091355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:41:09.091365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:41:09.091375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:41:09.091385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:41:09.091395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:41:09.091407 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:41:09.091417 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:41:09.091427 kernel: TSC deadline timer available
Apr 21 10:41:09.091437 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 21 10:41:09.091447 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:41:09.091457 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:41:09.091467 kernel: kvm-guest: setup PV sched yield
Apr 21 10:41:09.091477 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 21 10:41:09.091487 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:41:09.091499 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:41:09.091509 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 10:41:09.091519 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 21 10:41:09.091530 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 21 10:41:09.091539 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 10:41:09.091549 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:41:09.091559 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:41:09.091570 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:41:09.091583 kernel: random: crng init done
Apr 21 10:41:09.091592 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:41:09.091603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:41:09.091637 kernel: Fallback order for Node 0: 0
Apr 21 10:41:09.091647 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 21 10:41:09.091657 kernel: Policy zone: DMA32
Apr 21 10:41:09.091667 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:41:09.091678 kernel: Memory: 2394672K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 172124K reserved, 0K cma-reserved)
Apr 21 10:41:09.091688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 10:41:09.091700 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:41:09.091710 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:41:09.091720 kernel: Dynamic Preempt: voluntary
Apr 21 10:41:09.091731 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:41:09.091753 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:41:09.091766 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 10:41:09.091777 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:41:09.091788 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:41:09.091799 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:41:09.091809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:41:09.091820 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 10:41:09.091833 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 10:41:09.091844 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:41:09.091855 kernel: Console: colour dummy device 80x25
Apr 21 10:41:09.091865 kernel: printk: console [ttyS0] enabled
Apr 21 10:41:09.091876 kernel: ACPI: Core revision 20230628
Apr 21 10:41:09.091887 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:41:09.091899 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:41:09.091910 kernel: x2apic enabled
Apr 21 10:41:09.091921 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:41:09.091932 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:41:09.091943 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:41:09.091973 kernel: kvm-guest: setup PV IPIs
Apr 21 10:41:09.091984 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:41:09.091995 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:41:09.092005 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 10:41:09.092018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:41:09.092029 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 10:41:09.092039 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 10:41:09.092050 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:41:09.092060 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:41:09.092071 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:41:09.092081 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:41:09.092092 kernel: RETBleed: Vulnerable
Apr 21 10:41:09.092103 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:41:09.092115 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:41:09.092125 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:41:09.092136 kernel: active return thunk: its_return_thunk
Apr 21 10:41:09.092146 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:41:09.092156 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:41:09.092167 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:41:09.092178 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:41:09.092188 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:41:09.092198 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:41:09.092211 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:41:09.092221 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:41:09.092232 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:41:09.092242 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:41:09.092253 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:41:09.092263 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 10:41:09.092273 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:41:09.092284 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:41:09.092296 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:41:09.092306 kernel: landlock: Up and running.
Apr 21 10:41:09.092317 kernel: SELinux: Initializing.
Apr 21 10:41:09.092328 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:41:09.092338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:41:09.092348 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 10:41:09.092359 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:41:09.092370 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:41:09.092380 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 10:41:09.092393 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 10:41:09.092403 kernel: signal: max sigframe size: 3632
Apr 21 10:41:09.092414 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:41:09.092424 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:41:09.092435 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:41:09.092445 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:41:09.092455 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:41:09.092466 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 10:41:09.092476 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 10:41:09.092488 kernel: smpboot: Max logical packages: 1
Apr 21 10:41:09.092499 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 10:41:09.092510 kernel: devtmpfs: initialized
Apr 21 10:41:09.092520 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:41:09.092530 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 21 10:41:09.092541 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 21 10:41:09.092551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 21 10:41:09.092562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 21 10:41:09.092572 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 21 10:41:09.092585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:41:09.092596 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 10:41:09.092606 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:41:09.092641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:41:09.092652 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:41:09.092663 kernel: audit: type=2000 audit(1776768066.925:1): state=initialized audit_enabled=0 res=1
Apr 21 10:41:09.092673 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:41:09.092684 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:41:09.092694 kernel: cpuidle: using governor menu
Apr 21 10:41:09.092707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:41:09.092717 kernel: dca service started, version 1.12.1
Apr 21 10:41:09.092728 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:41:09.092739 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:41:09.092749 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:41:09.092760 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:41:09.092771 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:41:09.092781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:41:09.092792 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:41:09.092804 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:41:09.092814 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:41:09.092825 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:41:09.092835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:41:09.092846 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:41:09.092856 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:41:09.092867 kernel: ACPI: Interpreter enabled
Apr 21 10:41:09.092877 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:41:09.092888 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:41:09.092901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:41:09.092911 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:41:09.092922 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:41:09.092932 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:41:09.093119 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:41:09.093216 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:41:09.093305 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:41:09.093320 kernel: PCI host bridge to bus 0000:00
Apr 21 10:41:09.093407 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:41:09.093486 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:41:09.093563 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:41:09.093764 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 10:41:09.093843 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:41:09.093919 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 21 10:41:09.094025 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:41:09.094127 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:41:09.094224 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:41:09.094313 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 21 10:41:09.094398 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 21 10:41:09.094486 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:41:09.094572 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:41:09.094694 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:41:09.094788 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 21 10:41:09.094876 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 21 10:41:09.094984 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 21 10:41:09.095073 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 21 10:41:09.095166 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 21 10:41:09.095282 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 21 10:41:09.095371 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 21 10:41:09.095457 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 21 10:41:09.095550 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:41:09.095668 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 21 10:41:09.095807 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 21 10:41:09.096002 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 21 10:41:09.096120 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 21 10:41:09.096211 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:41:09.096298 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:41:09.096395 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:41:09.096482 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 21 10:41:09.096569 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 21 10:41:09.096707 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:41:09.096798 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 21 10:41:09.096813 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:41:09.096824 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:41:09.096834 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:41:09.096846 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:41:09.096856 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:41:09.096867 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:41:09.096877 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:41:09.096891 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:41:09.096901 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:41:09.096912 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:41:09.096923 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:41:09.096933 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:41:09.096944 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:41:09.096974 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:41:09.096985 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:41:09.096995 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:41:09.097008 kernel: iommu: Default domain type: Translated
Apr 21 10:41:09.097018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:41:09.097029 kernel: efivars: Registered efivars operations
Apr 21 10:41:09.097041 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:41:09.097051 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:41:09.097062 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 21 10:41:09.097073 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 21 10:41:09.097083 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 21 10:41:09.097093 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 21 10:41:09.097184 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:41:09.097271 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:41:09.097358 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:41:09.097371 kernel: vgaarb: loaded
Apr 21 10:41:09.097382 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:41:09.097392 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:41:09.097403 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:41:09.097414 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:41:09.097424 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:41:09.097454 kernel: pnp: PnP ACPI init
Apr 21 10:41:09.097547 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:41:09.097563 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 10:41:09.097573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:41:09.097584 kernel: NET: Registered PF_INET protocol family
Apr 21 10:41:09.097596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:41:09.097607 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:41:09.097647 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:41:09.097661 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:41:09.097672 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:41:09.097683 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:41:09.097694 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:41:09.097704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:41:09.097716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:41:09.097727 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:41:09.097819 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 21 10:41:09.097945 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 21 10:41:09.098139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:41:09.098252 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:41:09.098331 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:41:09.098411 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 10:41:09.098488 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:41:09.098579 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 21 10:41:09.098593 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:41:09.098605 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:41:09.098645 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 10:41:09.098656 kernel: Initialise system trusted keyrings
Apr 21 10:41:09.098667 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:41:09.098677 kernel: Key type asymmetric registered
Apr 21 10:41:09.098688 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:41:09.098699 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:41:09.098709 kernel: io scheduler mq-deadline registered
Apr 21 10:41:09.098720 kernel: io scheduler kyber registered
Apr 21 10:41:09.098734 kernel: io scheduler bfq registered
Apr 21 10:41:09.098745 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:41:09.098756 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:41:09.098767 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:41:09.098778 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 10:41:09.098788 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:41:09.098799 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:41:09.098810 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:41:09.098821 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:41:09.098832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:41:09.098925 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 10:41:09.099028 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 10:41:09.099042 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:41:09.099121 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:41:08 UTC (1776768068)
Apr 21 10:41:09.099200 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 21 10:41:09.099213 kernel: intel_pstate: CPU model not supported
Apr 21 10:41:09.099224 kernel: efifb: probing for efifb
Apr 21 10:41:09.099237 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 21 10:41:09.099248 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 21 10:41:09.099258 kernel: efifb: scrolling: redraw
Apr 21 10:41:09.099272 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 21 10:41:09.099283 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:41:09.099294 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:41:09.099321 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:41:09.099334 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:41:09.099345 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:41:09.099357 kernel: Segment Routing with IPv6
Apr 21 10:41:09.099368 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:41:09.099379 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:41:09.099390 kernel: Key type dns_resolver registered
Apr 21 10:41:09.099400 kernel: IPI shorthand broadcast: enabled
Apr 21 10:41:09.099412 kernel: sched_clock: Marking stable (886017180, 305352327)->(1403844060, -212474553)
Apr 21 10:41:09.099423 kernel: registered taskstats version 1
Apr 21 10:41:09.099433 kernel: Loading compiled-in X.509 certificates
Apr 21 10:41:09.099445 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:41:09.099456 kernel: Key type .fscrypt registered
Apr 21 10:41:09.099468 kernel: Key type fscrypt-provisioning registered
Apr 21 10:41:09.099479 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:41:09.099490 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:41:09.099501 kernel: ima: No architecture policies found
Apr 21 10:41:09.099512 kernel: clk: Disabling unused clocks
Apr 21 10:41:09.099523 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:41:09.099535 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:41:09.099545 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:41:09.099557 kernel: Run /init as init process
Apr 21 10:41:09.099570 kernel: with arguments:
Apr 21 10:41:09.099581 kernel: /init
Apr 21 10:41:09.099592 kernel: with environment:
Apr 21 10:41:09.099602 kernel: HOME=/
Apr 21 10:41:09.099644 kernel: TERM=linux
Apr 21 10:41:09.099660 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:41:09.099675 systemd[1]: Detected virtualization kvm.
Apr 21 10:41:09.099689 systemd[1]: Detected architecture x86-64.
Apr 21 10:41:09.099701 systemd[1]: Running in initrd.
Apr 21 10:41:09.099713 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:41:09.099724 systemd[1]: Hostname set to .
Apr 21 10:41:09.099737 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:41:09.099761 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:41:09.099775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:41:09.099787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:41:09.099809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:41:09.099822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:41:09.099844 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:41:09.099866 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:41:09.099902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:41:09.099924 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:41:09.099946 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:41:09.099976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:41:09.099998 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:41:09.100010 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:41:09.100022 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:41:09.100033 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:41:09.100050 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:41:09.100062 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:41:09.100073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:41:09.100085 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:41:09.100097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:41:09.100108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:41:09.100120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:41:09.100132 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:41:09.100143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:41:09.100157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:41:09.100170 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:41:09.100181 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:41:09.100208 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:41:09.100220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:41:09.100243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:41:09.100256 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:41:09.100294 systemd-journald[194]: Collecting audit messages is disabled. Apr 21 10:41:09.100326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:41:09.100338 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:41:09.100354 systemd-journald[194]: Journal started Apr 21 10:41:09.100381 systemd-journald[194]: Runtime Journal (/run/log/journal/e09a3defe57144118d244b3cc8bbc6f1) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:41:09.100282 systemd-modules-load[195]: Inserted module 'overlay' Apr 21 10:41:09.118464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:41:09.118542 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:41:09.131512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:41:09.134408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:41:09.137072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:41:09.152904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:41:09.158403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:41:09.163239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:41:09.170490 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:41:09.184667 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:41:09.189918 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 21 10:41:09.191243 kernel: Bridge firewalling registered Apr 21 10:41:09.191281 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:41:09.204927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:41:09.207225 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:41:09.214813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:41:09.224312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:41:09.231673 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:41:09.238035 dracut-cmdline[226]: dracut-dracut-053 Apr 21 10:41:09.242124 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:41:09.287311 systemd-resolved[231]: Positive Trust Anchors: Apr 21 10:41:09.287343 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:41:09.287384 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:41:09.290349 systemd-resolved[231]: Defaulting to hostname 'linux'. Apr 21 10:41:09.291291 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:41:09.293075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:41:09.374682 kernel: SCSI subsystem initialized Apr 21 10:41:09.387866 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:41:09.407699 kernel: iscsi: registered transport (tcp) Apr 21 10:41:09.437771 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:41:09.437857 kernel: QLogic iSCSI HBA Driver Apr 21 10:41:09.486531 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:41:09.507991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:41:09.549911 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:41:09.550016 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:41:09.552278 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:41:09.606683 kernel: raid6: avx512x4 gen() 27672 MB/s Apr 21 10:41:09.624693 kernel: raid6: avx512x2 gen() 29261 MB/s Apr 21 10:41:09.641673 kernel: raid6: avx512x1 gen() 37492 MB/s Apr 21 10:41:09.658681 kernel: raid6: avx2x4 gen() 34067 MB/s Apr 21 10:41:09.675662 kernel: raid6: avx2x2 gen() 31753 MB/s Apr 21 10:41:09.694057 kernel: raid6: avx2x1 gen() 25539 MB/s Apr 21 10:41:09.694139 kernel: raid6: using algorithm avx512x1 gen() 37492 MB/s Apr 21 10:41:09.712589 kernel: raid6: .... xor() 21334 MB/s, rmw enabled Apr 21 10:41:09.712720 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:41:09.732673 kernel: xor: automatically using best checksumming function avx Apr 21 10:41:09.872679 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:41:09.882372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:41:09.890848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:41:09.901070 systemd-udevd[412]: Using default interface naming scheme 'v255'. Apr 21 10:41:09.905066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:41:09.906248 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:41:09.924670 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Apr 21 10:41:09.957415 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:41:09.965875 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:41:10.000749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:41:10.011802 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:41:10.024026 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:41:10.026207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:41:10.026541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:41:10.034797 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:41:10.044742 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:41:10.045761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:41:10.049034 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:41:10.055657 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:41:10.056519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:41:10.058759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:41:10.065660 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:41:10.065708 kernel: GPT:9289727 != 19775487 Apr 21 10:41:10.065718 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:41:10.065726 kernel: GPT:9289727 != 19775487 Apr 21 10:41:10.067230 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:41:10.067265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:41:10.068591 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:41:10.072389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:41:10.072801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:41:10.076608 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:41:10.082655 kernel: libata version 3.00 loaded. 
Apr 21 10:41:10.090125 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:41:10.090289 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:41:10.094895 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:41:10.095050 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:41:10.090920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:41:10.100119 kernel: scsi host0: ahci Apr 21 10:41:10.095553 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:41:10.103684 kernel: scsi host1: ahci Apr 21 10:41:10.105561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:41:10.112762 kernel: scsi host2: ahci Apr 21 10:41:10.112900 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:41:10.112914 kernel: scsi host3: ahci Apr 21 10:41:10.113038 kernel: AES CTR mode by8 optimization enabled Apr 21 10:41:10.113051 kernel: scsi host4: ahci Apr 21 10:41:10.113141 kernel: scsi host5: ahci Apr 21 10:41:10.113231 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 21 10:41:10.114212 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 21 10:41:10.115955 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 21 10:41:10.119341 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 21 10:41:10.119366 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 21 10:41:10.122766 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 21 10:41:10.132693 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (468) Apr 21 10:41:10.134839 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Apr 21 10:41:10.137783 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - 
/dev/disk/by-label/EFI-SYSTEM. Apr 21 10:41:10.144114 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 21 10:41:10.154243 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:41:10.155022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:41:10.165431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:41:10.185897 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:41:10.189052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:41:10.198384 disk-uuid[557]: Primary Header is updated. Apr 21 10:41:10.198384 disk-uuid[557]: Secondary Entries is updated. Apr 21 10:41:10.198384 disk-uuid[557]: Secondary Header is updated. Apr 21 10:41:10.201874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:41:10.205648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:41:10.213800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 10:41:10.437666 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:41:10.437739 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:41:10.439898 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:41:10.440689 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:41:10.441660 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:41:10.443688 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:41:10.445317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:41:10.445334 kernel: ata3.00: applying bridge limits Apr 21 10:41:10.447100 kernel: ata3.00: configured for UDMA/100 Apr 21 10:41:10.447646 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:41:10.498545 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:41:10.498817 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:41:10.512662 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:41:11.208742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:41:11.210026 disk-uuid[561]: The operation has completed successfully. Apr 21 10:41:11.232309 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:41:11.232401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:41:11.255867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:41:11.259471 sh[595]: Success Apr 21 10:41:11.270666 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:41:11.300302 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:41:11.311134 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:41:11.313073 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:41:11.324932 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:41:11.324984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:41:11.325003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:41:11.326580 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:41:11.327678 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:41:11.333478 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:41:11.334474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:41:11.354037 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:41:11.355744 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:41:11.369656 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:41:11.369702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:41:11.369720 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:41:11.374677 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:41:11.382605 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:41:11.386378 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:41:11.392196 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:41:11.399828 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:41:11.445192 ignition[699]: Ignition 2.19.0 Apr 21 10:41:11.445202 ignition[699]: Stage: fetch-offline Apr 21 10:41:11.445226 ignition[699]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:41:11.445232 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:41:11.445302 ignition[699]: parsed url from cmdline: "" Apr 21 10:41:11.445305 ignition[699]: no config URL provided Apr 21 10:41:11.445308 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:41:11.445313 ignition[699]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:41:11.445334 ignition[699]: op(1): [started] loading QEMU firmware config module Apr 21 10:41:11.445344 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:41:11.452670 ignition[699]: op(1): [finished] loading QEMU firmware config module Apr 21 10:41:11.475409 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:41:11.488786 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:41:11.508108 systemd-networkd[784]: lo: Link UP Apr 21 10:41:11.508128 systemd-networkd[784]: lo: Gained carrier Apr 21 10:41:11.509040 systemd-networkd[784]: Enumeration completed Apr 21 10:41:11.509516 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:41:11.509518 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:41:11.509698 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:41:11.511629 systemd-networkd[784]: eth0: Link UP Apr 21 10:41:11.511632 systemd-networkd[784]: eth0: Gained carrier Apr 21 10:41:11.511638 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:41:11.512764 systemd[1]: Reached target network.target - Network. Apr 21 10:41:11.526679 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:41:11.578590 ignition[699]: parsing config with SHA512: ef4ed56b296fd7a95ee1f79bd1c21b5e77259de4757e69fc51a92c453631e4e4a63faee2a67cd744fe158899744d4a04f0d0b9c10262abe79033349fcae7ee32 Apr 21 10:41:11.582199 unknown[699]: fetched base config from "system" Apr 21 10:41:11.582209 unknown[699]: fetched user config from "qemu" Apr 21 10:41:11.582512 ignition[699]: fetch-offline: fetch-offline passed Apr 21 10:41:11.582554 ignition[699]: Ignition finished successfully Apr 21 10:41:11.585791 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:41:11.588319 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:41:11.594874 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:41:11.606915 ignition[788]: Ignition 2.19.0 Apr 21 10:41:11.606928 ignition[788]: Stage: kargs Apr 21 10:41:11.607095 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:41:11.607102 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:41:11.607721 ignition[788]: kargs: kargs passed Apr 21 10:41:11.607753 ignition[788]: Ignition finished successfully Apr 21 10:41:11.612550 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:41:11.619780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 21 10:41:11.631596 ignition[798]: Ignition 2.19.0 Apr 21 10:41:11.631650 ignition[798]: Stage: disks Apr 21 10:41:11.631779 ignition[798]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:41:11.631785 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:41:11.632484 ignition[798]: disks: disks passed Apr 21 10:41:11.632522 ignition[798]: Ignition finished successfully Apr 21 10:41:11.638073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:41:11.642104 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:41:11.643007 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:41:11.646079 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:41:11.649216 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:41:11.652425 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:41:11.670851 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:41:11.681226 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:41:11.685787 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:41:11.690794 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:41:11.778652 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:41:11.778595 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:41:11.779688 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:41:11.793722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:41:11.796489 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:41:11.798145 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 21 10:41:11.798176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:41:11.798193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:41:11.812365 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Apr 21 10:41:11.802812 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:41:11.819051 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:41:11.819068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:41:11.819077 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:41:11.819086 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:41:11.804643 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:41:11.820152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:41:11.842667 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:41:11.846091 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:41:11.848801 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:41:11.853066 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:41:11.917111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:41:11.924864 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:41:11.928550 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:41:11.933009 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:41:11.947429 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 21 10:41:11.951292 ignition[929]: INFO : Ignition 2.19.0 Apr 21 10:41:11.951292 ignition[929]: INFO : Stage: mount Apr 21 10:41:11.953358 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:41:11.953358 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:41:11.953358 ignition[929]: INFO : mount: mount passed Apr 21 10:41:11.953358 ignition[929]: INFO : Ignition finished successfully Apr 21 10:41:11.956673 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:41:11.969728 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:41:12.323032 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:41:12.339790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:41:12.348654 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Apr 21 10:41:12.348681 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:41:12.351269 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:41:12.351280 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:41:12.355654 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:41:12.356248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:41:12.375803 ignition[959]: INFO : Ignition 2.19.0
Apr 21 10:41:12.375803 ignition[959]: INFO : Stage: files
Apr 21 10:41:12.377938 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:41:12.377938 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:41:12.377938 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:41:12.383253 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:41:12.383253 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:41:12.387475 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:41:12.387475 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:41:12.391444 unknown[959]: wrote ssh authorized keys file for user: core
Apr 21 10:41:12.393062 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:41:12.393062 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:41:12.393062 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:41:12.500539 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:41:12.732958 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 21 10:41:12.844907 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:41:12.844907 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:41:12.850372 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 21 10:41:13.102603 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:41:13.260248 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:41:13.264891 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:41:13.307679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:41:13.307679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:41:13.307679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:41:13.307679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:41:13.307679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:41:13.509182 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:41:13.729360 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:41:13.729360 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 21 10:41:13.734719 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:41:13.758431 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:41:13.773176 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:41:13.773176 ignition[959]: INFO : files: files passed
Apr 21 10:41:13.773176 ignition[959]: INFO : Ignition finished successfully
Apr 21 10:41:13.774180 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:41:13.791941 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:41:13.796017 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:41:13.797307 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:41:13.797418 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:41:13.808603 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:41:13.813737 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:41:13.816376 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:41:13.816973 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:41:13.816469 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:41:13.817575 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:41:13.828805 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:41:13.851645 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:41:13.851740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:41:13.854939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:41:13.857959 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:41:13.860684 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:41:13.865103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:41:13.881770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:41:13.883428 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:41:13.896792 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:41:13.898203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:41:13.901716 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:41:13.907753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:41:13.907864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:41:13.912666 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:41:13.913532 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:41:13.913909 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:41:13.918700 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:41:13.925502 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:41:13.928993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:41:13.929968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:41:13.933053 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:41:13.938847 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:41:13.941959 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:41:13.944509 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:41:13.945994 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:41:13.949450 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:41:13.953319 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:41:13.955424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:41:13.957210 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:41:13.960590 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:41:13.960737 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:41:13.964797 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:41:13.964908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:41:13.968485 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:41:13.971044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:41:13.976717 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:41:13.977474 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:41:13.981325 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:41:13.986064 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:41:13.986177 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:41:13.988828 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:41:13.988916 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:41:13.989711 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:41:13.989816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:41:13.995892 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:41:13.996005 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:41:14.015895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:41:14.019110 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:41:14.020408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:41:14.020513 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:41:14.030600 ignition[1013]: INFO : Ignition 2.19.0
Apr 21 10:41:14.030600 ignition[1013]: INFO : Stage: umount
Apr 21 10:41:14.030600 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:41:14.030600 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:41:14.022424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:41:14.039451 ignition[1013]: INFO : umount: umount passed
Apr 21 10:41:14.039451 ignition[1013]: INFO : Ignition finished successfully
Apr 21 10:41:14.022490 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:41:14.027586 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:41:14.027703 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:41:14.032404 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:41:14.032522 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:41:14.033778 systemd[1]: Stopped target network.target - Network.
Apr 21 10:41:14.036417 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:41:14.036461 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:41:14.041713 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:41:14.041757 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:41:14.047102 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:41:14.047145 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:41:14.064511 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:41:14.064578 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:41:14.068650 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:41:14.070355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:41:14.072225 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:41:14.074689 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 21 10:41:14.076447 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:41:14.076577 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:41:14.082139 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:41:14.082259 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:41:14.088575 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:41:14.088665 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:41:14.108779 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:41:14.109416 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:41:14.109469 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:41:14.112117 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:41:14.112153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:41:14.116285 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:41:14.116314 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:41:14.119850 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:41:14.119897 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:41:14.121150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:41:14.132951 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:41:14.133059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:41:14.147317 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:41:14.147450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:41:14.150955 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:41:14.151048 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:41:14.154153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:41:14.154192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:41:14.156746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:41:14.156771 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:41:14.159604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:41:14.159661 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:41:14.163949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:41:14.164001 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:41:14.168220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:41:14.168254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:41:14.171790 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:41:14.171816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:41:14.186764 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:41:14.187433 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:41:14.187474 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:41:14.191062 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:41:14.191094 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:41:14.194125 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:41:14.194154 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:41:14.201016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:41:14.201055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:41:14.206060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:41:14.206132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:41:14.209972 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:41:14.218959 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:41:14.227855 systemd[1]: Switching root.
Apr 21 10:41:14.251833 systemd-journald[194]: Journal stopped
Apr 21 10:41:14.973651 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:41:14.973703 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:41:14.973715 kernel: SELinux: policy capability open_perms=1
Apr 21 10:41:14.973723 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:41:14.973732 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:41:14.973740 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:41:14.973748 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:41:14.973758 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:41:14.973766 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:41:14.973773 kernel: audit: type=1403 audit(1776768074.382:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:41:14.973784 systemd[1]: Successfully loaded SELinux policy in 34.381ms.
Apr 21 10:41:14.973803 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.055ms.
Apr 21 10:41:14.973812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:41:14.973820 systemd[1]: Detected virtualization kvm.
Apr 21 10:41:14.973829 systemd[1]: Detected architecture x86-64.
Apr 21 10:41:14.973838 systemd[1]: Detected first boot.
Apr 21 10:41:14.973846 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:41:14.973854 zram_generator::config[1058]: No configuration found.
Apr 21 10:41:14.973863 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:41:14.973871 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:41:14.973878 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:41:14.973886 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:41:14.973895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:41:14.973905 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:41:14.973934 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:41:14.973943 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:41:14.973952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:41:14.973960 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:41:14.973967 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:41:14.973976 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:41:14.974013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:41:14.974022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:41:14.974030 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:41:14.974039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:41:14.974047 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:41:14.974056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:41:14.974063 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:41:14.974071 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:41:14.974078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:41:14.974086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:41:14.974094 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:41:14.974105 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:41:14.974112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:41:14.974120 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:41:14.974128 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:41:14.974135 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:41:14.974147 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:41:14.974155 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:41:14.974162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:41:14.974172 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:41:14.974180 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:41:14.974188 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:41:14.974212 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:41:14.974220 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:41:14.974237 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:41:14.974245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:41:14.974253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:41:14.974261 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:41:14.974270 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:41:14.974278 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:41:14.974286 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:41:14.974293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:41:14.974302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:41:14.974309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:41:14.974317 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:41:14.974325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:41:14.974336 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:41:14.974357 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:41:14.974366 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:41:14.974375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:41:14.974383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:41:14.974390 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:41:14.974398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:41:14.974406 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:41:14.974425 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:41:14.974434 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:41:14.974442 kernel: fuse: init (API version 7.39)
Apr 21 10:41:14.974449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:41:14.974457 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:41:14.974465 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:41:14.974472 kernel: ACPI: bus type drm_connector registered
Apr 21 10:41:14.974489 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:41:14.974508 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:41:14.974516 systemd[1]: Stopped verity-setup.service.
Apr 21 10:41:14.974525 kernel: loop: module loaded
Apr 21 10:41:14.974545 systemd-journald[1143]: Collecting audit messages is disabled.
Apr 21 10:41:14.974574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:41:14.974582 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:41:14.974592 systemd-journald[1143]: Journal started
Apr 21 10:41:14.974633 systemd-journald[1143]: Runtime Journal (/run/log/journal/e09a3defe57144118d244b3cc8bbc6f1) is 6.0M, max 48.3M, 42.2M free.
Apr 21 10:41:14.688042 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:41:14.703373 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:41:14.703738 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:41:14.978708 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:41:14.979505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:41:14.981313 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:41:14.982881 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:41:14.984534 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:41:14.986203 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:41:14.987770 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:41:14.989699 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:41:14.991746 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:41:14.991859 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:41:14.993978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:41:14.994124 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:41:14.995957 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:41:14.996086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:41:14.997870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:41:14.997973 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:41:15.000045 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:41:15.000186 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:41:15.002045 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:41:15.002167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:41:15.004023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:41:15.005889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:41:15.007870 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:41:15.017032 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:41:15.034843 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:41:15.037738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:41:15.039508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:41:15.039534 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:41:15.042143 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:41:15.044690 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:41:15.047055 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:41:15.048551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:41:15.049335 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:41:15.051780 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:41:15.054065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:41:15.055807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:41:15.057554 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:41:15.061299 systemd-journald[1143]: Time spent on flushing to /var/log/journal/e09a3defe57144118d244b3cc8bbc6f1 is 11.652ms for 994 entries.
Apr 21 10:41:15.061299 systemd-journald[1143]: System Journal (/var/log/journal/e09a3defe57144118d244b3cc8bbc6f1) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:41:15.080339 systemd-journald[1143]: Received client request to flush runtime journal.
Apr 21 10:41:15.080377 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:41:15.058958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:41:15.064053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:41:15.068812 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:41:15.072369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:41:15.073242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:41:15.080869 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:41:15.083300 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:41:15.086023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:41:15.089094 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:41:15.091288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:41:15.097143 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:41:15.105113 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:41:15.105817 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Apr 21 10:41:15.105825 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Apr 21 10:41:15.107283 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:41:15.111730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:41:15.113875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:41:15.122804 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:41:15.131304 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:41:15.159649 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:41:15.172391 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:41:15.180050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:41:15.184883 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:41:15.185411 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:41:15.192713 kernel: loop2: detected capacity change from 0 to 228704
Apr 21 10:41:15.195874 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 21 10:41:15.195897 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 21 10:41:15.198582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:41:15.238784 kernel: loop3: detected capacity change from 0 to 140768 Apr 21 10:41:15.248656 kernel: loop4: detected capacity change from 0 to 142488 Apr 21 10:41:15.265672 kernel: loop5: detected capacity change from 0 to 228704 Apr 21 10:41:15.273569 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 10:41:15.273883 (sd-merge)[1202]: Merged extensions into '/usr'. Apr 21 10:41:15.277568 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:41:15.277595 systemd[1]: Reloading... Apr 21 10:41:15.311762 zram_generator::config[1224]: No configuration found. Apr 21 10:41:15.355946 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:41:15.392193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:41:15.420787 systemd[1]: Reloading finished in 142 ms. Apr 21 10:41:15.450705 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:41:15.452674 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:41:15.454602 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:41:15.470760 systemd[1]: Starting ensure-sysext.service... Apr 21 10:41:15.473014 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:41:15.475555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:41:15.478719 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... 
Apr 21 10:41:15.478740 systemd[1]: Reloading... Apr 21 10:41:15.489707 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:41:15.489917 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:41:15.490426 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:41:15.490597 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 21 10:41:15.490683 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 21 10:41:15.492322 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:41:15.492342 systemd-tmpfiles[1267]: Skipping /boot Apr 21 10:41:15.495702 systemd-udevd[1268]: Using default interface naming scheme 'v255'. Apr 21 10:41:15.497287 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:41:15.497309 systemd-tmpfiles[1267]: Skipping /boot Apr 21 10:41:15.525699 zram_generator::config[1300]: No configuration found. 
Apr 21 10:41:15.552664 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1314) Apr 21 10:41:15.579645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 10:41:15.585650 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:41:15.599849 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 10:41:15.600073 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:41:15.603014 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:41:15.607780 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:41:15.615760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:41:15.638930 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 21 10:41:15.681664 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:41:15.685201 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:41:15.685502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:41:15.689827 systemd[1]: Reloading finished in 210 ms. Apr 21 10:41:15.727183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:41:15.746131 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:41:15.763207 systemd[1]: Finished ensure-sysext.service. Apr 21 10:41:15.764798 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:41:15.778539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:41:15.789954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 21 10:41:15.792922 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:41:15.794809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:41:15.795780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:41:15.798440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:41:15.800595 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:41:15.805579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:41:15.808696 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:41:15.811330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:41:15.813379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:41:15.814180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:41:15.816980 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:41:15.820786 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:41:15.824876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:41:15.829180 augenrules[1389]: No rules Apr 21 10:41:15.830760 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:41:15.833661 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:41:15.836429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:41:15.838335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:41:15.838963 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:41:15.841305 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:41:15.843877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:41:15.844037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:41:15.846091 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:41:15.846219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:41:15.848177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:41:15.848283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:41:15.850340 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:41:15.850455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:41:15.851263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:41:15.851478 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:41:15.857550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:41:15.859449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:41:15.867961 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:41:15.868553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 21 10:41:15.868603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:41:15.870337 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:41:15.872728 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:41:15.871925 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:41:15.872765 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:41:15.873370 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:41:15.881899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:41:15.886398 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:41:15.895842 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:41:15.898352 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 10:41:15.943272 systemd-networkd[1386]: lo: Link UP Apr 21 10:41:15.943278 systemd-networkd[1386]: lo: Gained carrier Apr 21 10:41:15.944126 systemd-networkd[1386]: Enumeration completed Apr 21 10:41:15.944383 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:41:15.944549 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:41:15.944551 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 21 10:41:15.945183 systemd-networkd[1386]: eth0: Link UP Apr 21 10:41:15.945185 systemd-networkd[1386]: eth0: Gained carrier Apr 21 10:41:15.945194 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:41:15.946126 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:41:15.948142 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:41:15.953413 systemd-resolved[1388]: Positive Trust Anchors: Apr 21 10:41:15.953445 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:41:15.953471 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:41:15.955768 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:41:15.957040 systemd-resolved[1388]: Defaulting to hostname 'linux'. Apr 21 10:41:15.958515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:41:15.960079 systemd[1]: Reached target network.target - Network. Apr 21 10:41:15.962095 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:41:15.963684 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:41:15.965162 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 21 10:41:15.966795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:41:15.968602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:41:15.969685 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:41:15.970223 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:41:15.971037 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Apr 21 10:41:15.972110 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:41:15.973707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:41:15.973823 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 10:41:15.973869 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2026-04-21 10:41:15.615280 UTC. Apr 21 10:41:15.973961 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:41:15.975197 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:41:15.976905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:41:15.979374 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:41:15.995301 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:41:15.997300 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:41:15.998758 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:41:16.000079 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:41:16.001322 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 21 10:41:16.001352 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:41:16.002160 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:41:16.004226 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:41:16.006194 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:41:16.007654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:41:16.009046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:41:16.010773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:41:16.013796 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 10:41:16.016790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 21 10:41:16.018698 extend-filesystems[1434]: Found loop3 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found loop4 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found loop5 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found sr0 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda1 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda2 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda3 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found usr Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda4 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda6 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda7 Apr 21 10:41:16.022387 extend-filesystems[1434]: Found vda9 Apr 21 10:41:16.022387 extend-filesystems[1434]: Checking size of /dev/vda9 Apr 21 10:41:16.027530 jq[1433]: false Apr 21 10:41:16.025151 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 21 10:41:16.028159 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 10:41:16.029236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 10:41:16.029726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 10:41:16.030274 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 10:41:16.031780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 21 10:41:16.033197 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 10:41:16.034124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 10:41:16.042688 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 21 10:41:16.043098 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 10:41:16.044111 update_engine[1448]: I20260421 10:41:16.044050 1448 main.cc:92] Flatcar Update Engine starting Apr 21 10:41:16.045272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 10:41:16.045399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 10:41:16.046201 dbus-daemon[1432]: [system] SELinux support is enabled Apr 21 10:41:16.047424 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 10:41:16.049979 jq[1449]: true Apr 21 10:41:16.053436 update_engine[1448]: I20260421 10:41:16.053375 1448 update_check_scheduler.cc:74] Next update check in 2m0s Apr 21 10:41:16.055088 extend-filesystems[1434]: Resized partition /dev/vda9 Apr 21 10:41:16.057556 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:41:16.071721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1306) Apr 21 10:41:16.071742 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 10:41:16.062441 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:41:16.071899 tar[1451]: linux-amd64/LICENSE Apr 21 10:41:16.071899 tar[1451]: linux-amd64/helm Apr 21 10:41:16.063245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 10:41:16.063263 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 10:41:16.067947 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 21 10:41:16.067960 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 10:41:16.073546 systemd[1]: Started update-engine.service - Update Engine. Apr 21 10:41:16.076662 jq[1457]: true Apr 21 10:41:16.082914 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 10:41:16.095110 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Apr 21 10:41:16.095521 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:41:16.099847 systemd-logind[1446]: New seat seat0. Apr 21 10:41:16.104541 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 10:41:16.122522 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 10:41:16.136139 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:41:16.136352 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 10:41:16.136352 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 10:41:16.136352 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 10:41:16.147310 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Apr 21 10:41:16.149682 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:41:16.136948 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:41:16.137098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:41:16.143286 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:41:16.148691 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 21 10:41:16.214988 containerd[1463]: time="2026-04-21T10:41:16.214886152Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:41:16.230842 containerd[1463]: time="2026-04-21T10:41:16.230794526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232257 containerd[1463]: time="2026-04-21T10:41:16.232202329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232257 containerd[1463]: time="2026-04-21T10:41:16.232233916Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:41:16.232257 containerd[1463]: time="2026-04-21T10:41:16.232245025Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:41:16.232383 containerd[1463]: time="2026-04-21T10:41:16.232349314Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 21 10:41:16.232383 containerd[1463]: time="2026-04-21T10:41:16.232362946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232416 containerd[1463]: time="2026-04-21T10:41:16.232397603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232416 containerd[1463]: time="2026-04-21T10:41:16.232405817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232562 containerd[1463]: time="2026-04-21T10:41:16.232512796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232562 containerd[1463]: time="2026-04-21T10:41:16.232539478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232562 containerd[1463]: time="2026-04-21T10:41:16.232550410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232562 containerd[1463]: time="2026-04-21T10:41:16.232557336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232658 containerd[1463]: time="2026-04-21T10:41:16.232652182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232823 containerd[1463]: time="2026-04-21T10:41:16.232778541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232898 containerd[1463]: time="2026-04-21T10:41:16.232868038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:41:16.232898 containerd[1463]: time="2026-04-21T10:41:16.232887251Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 21 10:41:16.232968 containerd[1463]: time="2026-04-21T10:41:16.232935007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 21 10:41:16.232968 containerd[1463]: time="2026-04-21T10:41:16.232961704Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:41:16.237256 containerd[1463]: time="2026-04-21T10:41:16.237212397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:41:16.237256 containerd[1463]: time="2026-04-21T10:41:16.237252860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 10:41:16.237296 containerd[1463]: time="2026-04-21T10:41:16.237266014Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:41:16.237296 containerd[1463]: time="2026-04-21T10:41:16.237282083Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 21 10:41:16.237296 containerd[1463]: time="2026-04-21T10:41:16.237292381Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:41:16.237400 containerd[1463]: time="2026-04-21T10:41:16.237374520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:41:16.237551 containerd[1463]: time="2026-04-21T10:41:16.237531138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:41:16.237686 containerd[1463]: time="2026-04-21T10:41:16.237647898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:41:16.237686 containerd[1463]: time="2026-04-21T10:41:16.237670982Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 21 10:41:16.237686 containerd[1463]: time="2026-04-21T10:41:16.237680136Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:41:16.237724 containerd[1463]: time="2026-04-21T10:41:16.237689253Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237724 containerd[1463]: time="2026-04-21T10:41:16.237697760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237724 containerd[1463]: time="2026-04-21T10:41:16.237706050Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237724 containerd[1463]: time="2026-04-21T10:41:16.237715211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237778 containerd[1463]: time="2026-04-21T10:41:16.237729860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237778 containerd[1463]: time="2026-04-21T10:41:16.237739133Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237778 containerd[1463]: time="2026-04-21T10:41:16.237748015Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237778 containerd[1463]: time="2026-04-21T10:41:16.237755359Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:41:16.237778 containerd[1463]: time="2026-04-21T10:41:16.237771559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237780161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237788327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237800991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237808962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237817359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237824865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237835 containerd[1463]: time="2026-04-21T10:41:16.237833394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237842517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237852047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237859363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237867369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237874990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237884119Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237914171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237921878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.237933 containerd[1463]: time="2026-04-21T10:41:16.237930058Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.237961609Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.237973304Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.237980742Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.237992754Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.237999268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.238009146Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.238015765Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:41:16.238037 containerd[1463]: time="2026-04-21T10:41:16.238022169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 21 10:41:16.238264 containerd[1463]: time="2026-04-21T10:41:16.238202630Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:41:16.238264 containerd[1463]: time="2026-04-21T10:41:16.238257257Z" level=info msg="Connect containerd service" Apr 21 10:41:16.238377 containerd[1463]: time="2026-04-21T10:41:16.238282467Z" level=info msg="using legacy CRI server" Apr 21 10:41:16.238377 containerd[1463]: time="2026-04-21T10:41:16.238287347Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:41:16.238377 containerd[1463]: time="2026-04-21T10:41:16.238345179Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:41:16.239732 containerd[1463]: time="2026-04-21T10:41:16.239699554Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.239910841Z" level=info msg="Start subscribing containerd event" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.239970518Z" level=info msg="Start recovering state" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.239978370Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240044567Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240052650Z" level=info msg="Start event monitor" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240079224Z" level=info msg="Start snapshots syncer" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240086108Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240093878Z" level=info msg="Start streaming server" Apr 21 10:41:16.240248 containerd[1463]: time="2026-04-21T10:41:16.240146662Z" level=info msg="containerd successfully booted in 0.025936s" Apr 21 10:41:16.240368 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:41:16.463852 tar[1451]: linux-amd64/README.md Apr 21 10:41:16.476918 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:41:16.491572 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:41:16.508416 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:41:16.516894 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:41:16.521654 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:41:16.521793 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:41:16.524248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 21 10:41:16.533410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:41:16.535985 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:41:16.538096 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:41:16.539652 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:41:17.276941 systemd-networkd[1386]: eth0: Gained IPv6LL Apr 21 10:41:17.279196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:41:17.281366 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:41:17.294064 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:41:17.296879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:41:17.299432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:41:17.312959 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:41:17.313107 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 10:41:17.315276 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:41:17.316141 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:41:17.892028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:41:17.893960 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:41:17.895705 systemd[1]: Startup finished in 1.095s (kernel) + 5.574s (initrd) + 3.546s (userspace) = 10.217s. 
Apr 21 10:41:17.895991 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:41:18.261754 kubelet[1544]: E0421 10:41:18.261548 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:41:18.263786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:41:18.263898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:41:22.284794 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:41:22.286296 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:56796.service - OpenSSH per-connection server daemon (10.0.0.1:56796). Apr 21 10:41:22.351820 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:22.352913 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:22.365814 systemd-logind[1446]: New session 1 of user core. Apr 21 10:41:22.366934 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:41:22.375896 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:41:22.394685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:41:22.407990 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:41:22.415682 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:41:22.586208 systemd[1561]: Queued start job for default target default.target. 
Apr 21 10:41:22.603798 systemd[1561]: Created slice app.slice - User Application Slice. Apr 21 10:41:22.603842 systemd[1561]: Reached target paths.target - Paths. Apr 21 10:41:22.603855 systemd[1561]: Reached target timers.target - Timers. Apr 21 10:41:22.607179 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:41:22.620827 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:41:22.620919 systemd[1561]: Reached target sockets.target - Sockets. Apr 21 10:41:22.620932 systemd[1561]: Reached target basic.target - Basic System. Apr 21 10:41:22.620963 systemd[1561]: Reached target default.target - Main User Target. Apr 21 10:41:22.620987 systemd[1561]: Startup finished in 193ms. Apr 21 10:41:22.621243 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:41:22.622681 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:41:22.687708 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:56798.service - OpenSSH per-connection server daemon (10.0.0.1:56798). Apr 21 10:41:22.745442 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 56798 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:22.748387 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:22.754411 systemd-logind[1446]: New session 2 of user core. Apr 21 10:41:22.768835 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:41:22.829065 sshd[1572]: pam_unix(sshd:session): session closed for user core Apr 21 10:41:22.837099 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:56798.service: Deactivated successfully. Apr 21 10:41:22.841125 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:41:22.843471 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:56814.service - OpenSSH per-connection server daemon (10.0.0.1:56814). Apr 21 10:41:22.843804 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. 
Apr 21 10:41:22.846184 systemd-logind[1446]: Removed session 2. Apr 21 10:41:22.886844 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 56814 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:22.888081 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:22.892205 systemd-logind[1446]: New session 3 of user core. Apr 21 10:41:22.906837 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:41:22.958045 sshd[1579]: pam_unix(sshd:session): session closed for user core Apr 21 10:41:22.970378 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:56814.service: Deactivated successfully. Apr 21 10:41:22.971889 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:41:22.979360 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:41:22.987053 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:56822.service - OpenSSH per-connection server daemon (10.0.0.1:56822). Apr 21 10:41:22.988398 systemd-logind[1446]: Removed session 3. Apr 21 10:41:23.021518 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 56822 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:23.026003 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:23.031833 systemd-logind[1446]: New session 4 of user core. Apr 21 10:41:23.039835 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:41:23.098530 sshd[1586]: pam_unix(sshd:session): session closed for user core Apr 21 10:41:23.108997 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:56822.service: Deactivated successfully. Apr 21 10:41:23.110330 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:41:23.111984 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:41:23.114289 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:56824.service - OpenSSH per-connection server daemon (10.0.0.1:56824). 
Apr 21 10:41:23.115932 systemd-logind[1446]: Removed session 4. Apr 21 10:41:23.154806 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 56824 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:23.155669 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:23.160666 systemd-logind[1446]: New session 5 of user core. Apr 21 10:41:23.172813 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:41:23.232156 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:41:23.232454 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:41:23.257057 sudo[1597]: pam_unix(sudo:session): session closed for user root Apr 21 10:41:23.259809 sshd[1594]: pam_unix(sshd:session): session closed for user core Apr 21 10:41:23.277447 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:56824.service: Deactivated successfully. Apr 21 10:41:23.278956 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:41:23.280272 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:41:23.281372 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:56828.service - OpenSSH per-connection server daemon (10.0.0.1:56828). Apr 21 10:41:23.284184 systemd-logind[1446]: Removed session 5. Apr 21 10:41:23.325957 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 56828 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:23.326423 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:23.332968 systemd-logind[1446]: New session 6 of user core. Apr 21 10:41:23.347796 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:41:23.399860 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:41:23.401019 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:41:23.404993 sudo[1606]: pam_unix(sudo:session): session closed for user root Apr 21 10:41:23.410029 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:41:23.410314 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:41:23.443948 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:41:23.445760 auditctl[1609]: No rules Apr 21 10:41:23.446185 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:41:23.446359 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:41:23.449824 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:41:23.504439 augenrules[1627]: No rules Apr 21 10:41:23.506828 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:41:23.512114 sudo[1605]: pam_unix(sudo:session): session closed for user root Apr 21 10:41:23.517966 sshd[1602]: pam_unix(sshd:session): session closed for user core Apr 21 10:41:23.539465 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:56828.service: Deactivated successfully. Apr 21 10:41:23.541655 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:41:23.549103 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:41:23.567114 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:56830.service - OpenSSH per-connection server daemon (10.0.0.1:56830). Apr 21 10:41:23.569107 systemd-logind[1446]: Removed session 6. 
Apr 21 10:41:23.602684 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 56830 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:41:23.603995 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:41:23.613226 systemd-logind[1446]: New session 7 of user core. Apr 21 10:41:23.628445 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:41:23.682003 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:41:23.683327 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:41:24.087936 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:41:24.087988 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:41:24.455378 dockerd[1657]: time="2026-04-21T10:41:24.455157505Z" level=info msg="Starting up" Apr 21 10:41:24.661215 dockerd[1657]: time="2026-04-21T10:41:24.661109268Z" level=info msg="Loading containers: start." Apr 21 10:41:24.879657 kernel: Initializing XFRM netlink socket Apr 21 10:41:25.015914 systemd-networkd[1386]: docker0: Link UP Apr 21 10:41:25.040490 dockerd[1657]: time="2026-04-21T10:41:25.040423320Z" level=info msg="Loading containers: done." Apr 21 10:41:25.056311 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck764401630-merged.mount: Deactivated successfully. 
Apr 21 10:41:25.059323 dockerd[1657]: time="2026-04-21T10:41:25.059258184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:41:25.059451 dockerd[1657]: time="2026-04-21T10:41:25.059416357Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:41:25.059567 dockerd[1657]: time="2026-04-21T10:41:25.059534607Z" level=info msg="Daemon has completed initialization" Apr 21 10:41:25.097834 dockerd[1657]: time="2026-04-21T10:41:25.097595739Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:41:25.097966 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:41:25.465783 containerd[1463]: time="2026-04-21T10:41:25.465259916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 10:41:25.853777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300659260.mount: Deactivated successfully. 
Apr 21 10:41:26.469213 containerd[1463]: time="2026-04-21T10:41:26.469143939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:26.469784 containerd[1463]: time="2026-04-21T10:41:26.469741269Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 21 10:41:26.470924 containerd[1463]: time="2026-04-21T10:41:26.470850727Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:26.473016 containerd[1463]: time="2026-04-21T10:41:26.472976272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:26.473887 containerd[1463]: time="2026-04-21T10:41:26.473866407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.008574969s" Apr 21 10:41:26.473927 containerd[1463]: time="2026-04-21T10:41:26.473894626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 10:41:26.474511 containerd[1463]: time="2026-04-21T10:41:26.474467388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 10:41:27.226293 containerd[1463]: time="2026-04-21T10:41:27.226234093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.227048 containerd[1463]: time="2026-04-21T10:41:27.226997501Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 21 10:41:27.227857 containerd[1463]: time="2026-04-21T10:41:27.227809149Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.230141 containerd[1463]: time="2026-04-21T10:41:27.230083787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.231020 containerd[1463]: time="2026-04-21T10:41:27.230986772Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 756.477715ms" Apr 21 10:41:27.231020 containerd[1463]: time="2026-04-21T10:41:27.231018093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 10:41:27.231525 containerd[1463]: time="2026-04-21T10:41:27.231496123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 10:41:27.961717 containerd[1463]: time="2026-04-21T10:41:27.961667273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.962243 containerd[1463]: 
time="2026-04-21T10:41:27.962183587Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 21 10:41:27.963190 containerd[1463]: time="2026-04-21T10:41:27.963145805Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.965588 containerd[1463]: time="2026-04-21T10:41:27.965551677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:27.966484 containerd[1463]: time="2026-04-21T10:41:27.966459323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 734.924087ms" Apr 21 10:41:27.966517 containerd[1463]: time="2026-04-21T10:41:27.966490542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 10:41:27.967057 containerd[1463]: time="2026-04-21T10:41:27.967023062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 10:41:28.466546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:41:28.472791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:41:28.618823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:41:28.622940 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:41:28.688414 kubelet[1884]: E0421 10:41:28.688354 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:41:28.691691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:41:28.691796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:41:28.735032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804683574.mount: Deactivated successfully. Apr 21 10:41:28.991252 containerd[1463]: time="2026-04-21T10:41:28.991113341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:28.991842 containerd[1463]: time="2026-04-21T10:41:28.991799255Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 21 10:41:28.992693 containerd[1463]: time="2026-04-21T10:41:28.992662086Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:28.994456 containerd[1463]: time="2026-04-21T10:41:28.994412849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:28.994799 containerd[1463]: time="2026-04-21T10:41:28.994755917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.027701214s" Apr 21 10:41:28.994799 containerd[1463]: time="2026-04-21T10:41:28.994784220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 10:41:28.995375 containerd[1463]: time="2026-04-21T10:41:28.995355514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 10:41:29.431480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913724976.mount: Deactivated successfully. Apr 21 10:41:29.969536 containerd[1463]: time="2026-04-21T10:41:29.969461997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:29.970101 containerd[1463]: time="2026-04-21T10:41:29.970064138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 21 10:41:29.971250 containerd[1463]: time="2026-04-21T10:41:29.970970186Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:29.973456 containerd[1463]: time="2026-04-21T10:41:29.973394497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:29.974374 containerd[1463]: time="2026-04-21T10:41:29.974341296Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 978.959575ms" Apr 21 10:41:29.974374 containerd[1463]: time="2026-04-21T10:41:29.974368629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 10:41:29.974912 containerd[1463]: time="2026-04-21T10:41:29.974888051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 10:41:30.337989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761365124.mount: Deactivated successfully. Apr 21 10:41:30.343300 containerd[1463]: time="2026-04-21T10:41:30.343266542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:30.344001 containerd[1463]: time="2026-04-21T10:41:30.343948829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 10:41:30.344702 containerd[1463]: time="2026-04-21T10:41:30.344661659Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:30.346642 containerd[1463]: time="2026-04-21T10:41:30.346598957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:30.347347 containerd[1463]: time="2026-04-21T10:41:30.347300264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 372.346258ms" Apr 21 10:41:30.347408 containerd[1463]: time="2026-04-21T10:41:30.347344449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 10:41:30.348000 containerd[1463]: time="2026-04-21T10:41:30.347871530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 10:41:30.714377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917679372.mount: Deactivated successfully. Apr 21 10:41:31.250890 containerd[1463]: time="2026-04-21T10:41:31.250827113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:31.251639 containerd[1463]: time="2026-04-21T10:41:31.251568829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 21 10:41:31.252472 containerd[1463]: time="2026-04-21T10:41:31.252424050Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:31.254589 containerd[1463]: time="2026-04-21T10:41:31.254543999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:31.255661 containerd[1463]: time="2026-04-21T10:41:31.255633693Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 907.712639ms" Apr 21 
10:41:31.255700 containerd[1463]: time="2026-04-21T10:41:31.255667726Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 10:41:33.090209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:41:33.101837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:41:33.118973 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)... Apr 21 10:41:33.119007 systemd[1]: Reloading... Apr 21 10:41:33.171231 zram_generator::config[2087]: No configuration found. Apr 21 10:41:33.240119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:41:33.283989 systemd[1]: Reloading finished in 164 ms. Apr 21 10:41:33.321348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:41:33.323350 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:41:33.323505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:41:33.324641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:41:33.417115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:41:33.420116 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:41:33.451004 kubelet[2137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:41:33.451004 kubelet[2137]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 21 10:41:33.451004 kubelet[2137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:41:33.451225 kubelet[2137]: I0421 10:41:33.451072 2137 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:41:33.863306 kubelet[2137]: I0421 10:41:33.863194 2137 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:41:33.863306 kubelet[2137]: I0421 10:41:33.863227 2137 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:41:33.863511 kubelet[2137]: I0421 10:41:33.863477 2137 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:41:33.887349 kubelet[2137]: E0421 10:41:33.887305 2137 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:41:33.888147 kubelet[2137]: I0421 10:41:33.888100 2137 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:41:33.893826 kubelet[2137]: E0421 10:41:33.893768 2137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:41:33.893826 kubelet[2137]: I0421 10:41:33.893800 2137 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 21 10:41:33.896824 kubelet[2137]: I0421 10:41:33.896758 2137 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:41:33.897460 kubelet[2137]: I0421 10:41:33.897410 2137 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:41:33.897631 kubelet[2137]: I0421 10:41:33.897443 2137 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:41:33.897631 kubelet[2137]: I0421 10:41:33.897598 2137 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:41:33.897631 kubelet[2137]: I0421 10:41:33.897632 2137 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:41:33.897756 kubelet[2137]: I0421 10:41:33.897729 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:41:33.901362 kubelet[2137]: I0421 10:41:33.901299 2137 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:41:33.901362 kubelet[2137]: I0421 10:41:33.901339 2137 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:41:33.901439 kubelet[2137]: I0421 10:41:33.901383 2137 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:41:33.902818 kubelet[2137]: I0421 10:41:33.902791 2137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:41:33.905384 kubelet[2137]: I0421 10:41:33.905034 2137 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:41:33.905456 kubelet[2137]: I0421 10:41:33.905412 2137 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:41:33.906253 kubelet[2137]: W0421 10:41:33.906209 2137 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:41:33.911193 kubelet[2137]: E0421 10:41:33.909326 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:41:33.911193 kubelet[2137]: E0421 10:41:33.909539 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:41:33.912787 kubelet[2137]: I0421 10:41:33.912753 2137 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:41:33.912846 kubelet[2137]: I0421 10:41:33.912831 2137 server.go:1289] "Started kubelet" Apr 21 10:41:33.913741 kubelet[2137]: I0421 10:41:33.913699 2137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:41:33.913867 kubelet[2137]: I0421 10:41:33.913802 2137 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:41:33.914186 kubelet[2137]: I0421 10:41:33.914165 2137 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:41:33.917070 kubelet[2137]: I0421 10:41:33.917042 2137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:41:33.918868 kubelet[2137]: I0421 10:41:33.917239 2137 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:41:33.918868 kubelet[2137]: I0421 10:41:33.917392 2137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:41:33.918868 kubelet[2137]: 
E0421 10:41:33.916787 2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8592aedeb3a4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:41:33.912775245 +0000 UTC m=+0.489576277,LastTimestamp:2026-04-21 10:41:33.912775245 +0000 UTC m=+0.489576277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:41:33.918868 kubelet[2137]: E0421 10:41:33.918001 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:41:33.918868 kubelet[2137]: I0421 10:41:33.918018 2137 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:41:33.918868 kubelet[2137]: I0421 10:41:33.918309 2137 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:41:33.918868 kubelet[2137]: I0421 10:41:33.918349 2137 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:41:33.919198 kubelet[2137]: E0421 10:41:33.919141 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Apr 21 10:41:33.919790 kubelet[2137]: E0421 10:41:33.919702 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:41:33.920361 kubelet[2137]: E0421 10:41:33.919958 2137 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:41:33.920673 kubelet[2137]: I0421 10:41:33.920655 2137 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:41:33.920779 kubelet[2137]: I0421 10:41:33.920763 2137 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:41:33.921861 kubelet[2137]: I0421 10:41:33.921823 2137 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:41:33.930632 kubelet[2137]: I0421 10:41:33.930579 2137 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:41:33.930632 kubelet[2137]: I0421 10:41:33.930587 2137 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:41:33.930632 kubelet[2137]: I0421 10:41:33.930630 2137 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:41:33.933232 kubelet[2137]: I0421 10:41:33.933185 2137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:41:33.934294 kubelet[2137]: I0421 10:41:33.934260 2137 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:41:33.934294 kubelet[2137]: I0421 10:41:33.934291 2137 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:41:33.934347 kubelet[2137]: I0421 10:41:33.934305 2137 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:41:33.934347 kubelet[2137]: I0421 10:41:33.934311 2137 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:41:33.934347 kubelet[2137]: E0421 10:41:33.934336 2137 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:41:33.989728 kubelet[2137]: I0421 10:41:33.989667 2137 policy_none.go:49] "None policy: Start" Apr 21 10:41:33.989728 kubelet[2137]: I0421 10:41:33.989705 2137 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:41:33.989728 kubelet[2137]: I0421 10:41:33.989720 2137 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:41:33.990532 kubelet[2137]: E0421 10:41:33.990486 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:41:33.994302 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:41:34.012008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:41:34.014712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:41:34.018494 kubelet[2137]: E0421 10:41:34.018435 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:41:34.027319 kubelet[2137]: E0421 10:41:34.027276 2137 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:41:34.027448 kubelet[2137]: I0421 10:41:34.027409 2137 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:41:34.027448 kubelet[2137]: I0421 10:41:34.027435 2137 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:41:34.027701 kubelet[2137]: I0421 10:41:34.027681 2137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:41:34.028823 kubelet[2137]: E0421 10:41:34.028783 2137 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:41:34.028823 kubelet[2137]: E0421 10:41:34.028810 2137 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:41:34.043002 systemd[1]: Created slice kubepods-burstable-pod849e6febcf6f38882d9143cfc81a10ca.slice - libcontainer container kubepods-burstable-pod849e6febcf6f38882d9143cfc81a10ca.slice. Apr 21 10:41:34.056337 kubelet[2137]: E0421 10:41:34.056303 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:41:34.058503 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 21 10:41:34.080722 kubelet[2137]: E0421 10:41:34.080686 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:41:34.082782 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. Apr 21 10:41:34.084054 kubelet[2137]: E0421 10:41:34.084019 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:41:34.120010 kubelet[2137]: E0421 10:41:34.119849 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Apr 21 10:41:34.129046 kubelet[2137]: I0421 10:41:34.129024 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:41:34.129380 kubelet[2137]: E0421 10:41:34.129314 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Apr 21 10:41:34.219759 kubelet[2137]: I0421 10:41:34.219730 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:41:34.219759 kubelet[2137]: I0421 10:41:34.219764 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:41:34.219759 kubelet[2137]: I0421 10:41:34.219785 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:41:34.219926 kubelet[2137]: I0421 10:41:34.219805 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:41:34.219926 kubelet[2137]: I0421 10:41:34.219883 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:41:34.219926 kubelet[2137]: I0421 10:41:34.219900 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:41:34.220006 kubelet[2137]: I0421 10:41:34.219927 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:41:34.220006 kubelet[2137]: I0421 10:41:34.219956 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:41:34.220046 kubelet[2137]: I0421 10:41:34.219980 2137 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:41:34.330787 kubelet[2137]: I0421 10:41:34.330760 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:41:34.331129 kubelet[2137]: E0421 10:41:34.331105 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Apr 21 10:41:34.357558 kubelet[2137]: E0421 10:41:34.357530 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:34.358149 containerd[1463]: time="2026-04-21T10:41:34.358109970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:849e6febcf6f38882d9143cfc81a10ca,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:34.381582 kubelet[2137]: E0421 10:41:34.381437 2137 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:34.383302 containerd[1463]: time="2026-04-21T10:41:34.383154265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:34.384557 kubelet[2137]: E0421 10:41:34.384504 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:34.384897 containerd[1463]: time="2026-04-21T10:41:34.384852442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:34.520590 kubelet[2137]: E0421 10:41:34.520523 2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Apr 21 10:41:34.732885 kubelet[2137]: I0421 10:41:34.732774 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:41:34.733149 kubelet[2137]: E0421 10:41:34.733103 2137 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Apr 21 10:41:34.790999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837532887.mount: Deactivated successfully. 
Apr 21 10:41:34.794899 containerd[1463]: time="2026-04-21T10:41:34.794852662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:41:34.795554 containerd[1463]: time="2026-04-21T10:41:34.795521255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:41:34.797508 containerd[1463]: time="2026-04-21T10:41:34.797463985Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:41:34.798459 containerd[1463]: time="2026-04-21T10:41:34.798435370Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:41:34.799055 containerd[1463]: time="2026-04-21T10:41:34.799014751Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:41:34.799421 containerd[1463]: time="2026-04-21T10:41:34.799394563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:41:34.800044 containerd[1463]: time="2026-04-21T10:41:34.800004467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:41:34.800718 containerd[1463]: time="2026-04-21T10:41:34.800682509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:41:34.801215 
containerd[1463]: time="2026-04-21T10:41:34.801167919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.982406ms" Apr 21 10:41:34.803677 containerd[1463]: time="2026-04-21T10:41:34.803601130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.697179ms" Apr 21 10:41:34.804138 containerd[1463]: time="2026-04-21T10:41:34.804104056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 420.881316ms" Apr 21 10:41:34.894095 containerd[1463]: time="2026-04-21T10:41:34.894016525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:34.894095 containerd[1463]: time="2026-04-21T10:41:34.894066995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:34.894292 containerd[1463]: time="2026-04-21T10:41:34.894083592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.894292 containerd[1463]: time="2026-04-21T10:41:34.894133156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.894778 containerd[1463]: time="2026-04-21T10:41:34.894712578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:34.894980 containerd[1463]: time="2026-04-21T10:41:34.894742753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:34.894980 containerd[1463]: time="2026-04-21T10:41:34.894854518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:34.894980 containerd[1463]: time="2026-04-21T10:41:34.894867262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.894980 containerd[1463]: time="2026-04-21T10:41:34.894912309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.895366 containerd[1463]: time="2026-04-21T10:41:34.895304797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:34.895402 containerd[1463]: time="2026-04-21T10:41:34.895375912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.897283 containerd[1463]: time="2026-04-21T10:41:34.897191674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:34.920799 systemd[1]: Started cri-containerd-2b1099e1a5fbc7977bf85584d972ab19ee99350f1dac5e870f7647bc88195842.scope - libcontainer container 2b1099e1a5fbc7977bf85584d972ab19ee99350f1dac5e870f7647bc88195842. 
Apr 21 10:41:34.922003 systemd[1]: Started cri-containerd-980bd7261976622c16f42846fe75955b523537b07ad89c40837332100af7196a.scope - libcontainer container 980bd7261976622c16f42846fe75955b523537b07ad89c40837332100af7196a.
Apr 21 10:41:34.923378 systemd[1]: Started cri-containerd-cbed2340b1f59c4cfed1463ccfff745c573e5848f0e33fa6e66838509020644c.scope - libcontainer container cbed2340b1f59c4cfed1463ccfff745c573e5848f0e33fa6e66838509020644c.
Apr 21 10:41:34.936725 kubelet[2137]: E0421 10:41:34.936476 2137 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:41:34.960643 containerd[1463]: time="2026-04-21T10:41:34.960584661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"980bd7261976622c16f42846fe75955b523537b07ad89c40837332100af7196a\""
Apr 21 10:41:34.962070 kubelet[2137]: E0421 10:41:34.962029 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:34.963579 containerd[1463]: time="2026-04-21T10:41:34.963468573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:849e6febcf6f38882d9143cfc81a10ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1099e1a5fbc7977bf85584d972ab19ee99350f1dac5e870f7647bc88195842\""
Apr 21 10:41:34.963776 containerd[1463]: time="2026-04-21T10:41:34.963739507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbed2340b1f59c4cfed1463ccfff745c573e5848f0e33fa6e66838509020644c\""
Apr 21 10:41:34.964778 kubelet[2137]: E0421 10:41:34.964179 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:34.964778 kubelet[2137]: E0421 10:41:34.964371 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:34.967094 containerd[1463]: time="2026-04-21T10:41:34.967073659Z" level=info msg="CreateContainer within sandbox \"980bd7261976622c16f42846fe75955b523537b07ad89c40837332100af7196a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 10:41:34.968830 containerd[1463]: time="2026-04-21T10:41:34.968805648Z" level=info msg="CreateContainer within sandbox \"2b1099e1a5fbc7977bf85584d972ab19ee99350f1dac5e870f7647bc88195842\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 10:41:34.970514 containerd[1463]: time="2026-04-21T10:41:34.970487662Z" level=info msg="CreateContainer within sandbox \"cbed2340b1f59c4cfed1463ccfff745c573e5848f0e33fa6e66838509020644c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 10:41:34.985138 containerd[1463]: time="2026-04-21T10:41:34.985041850Z" level=info msg="CreateContainer within sandbox \"980bd7261976622c16f42846fe75955b523537b07ad89c40837332100af7196a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cf895e14311e251e1b2d2c52cefa3dcd8ae7b06fa8135e7550173e6499a734e\""
Apr 21 10:41:34.986021 containerd[1463]: time="2026-04-21T10:41:34.985982001Z" level=info msg="StartContainer for \"9cf895e14311e251e1b2d2c52cefa3dcd8ae7b06fa8135e7550173e6499a734e\""
Apr 21 10:41:34.988352 containerd[1463]: time="2026-04-21T10:41:34.988284081Z" level=info msg="CreateContainer within sandbox \"cbed2340b1f59c4cfed1463ccfff745c573e5848f0e33fa6e66838509020644c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"01b35b2725ccdf5cd8432527b998295633e8282463260f36b900b378e5653878\""
Apr 21 10:41:34.988722 containerd[1463]: time="2026-04-21T10:41:34.988701865Z" level=info msg="StartContainer for \"01b35b2725ccdf5cd8432527b998295633e8282463260f36b900b378e5653878\""
Apr 21 10:41:34.988995 containerd[1463]: time="2026-04-21T10:41:34.988968612Z" level=info msg="CreateContainer within sandbox \"2b1099e1a5fbc7977bf85584d972ab19ee99350f1dac5e870f7647bc88195842\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0230f9a440a784596592ae01aa9a46c3afdfc40817288cfde0d1965ca1bed4ef\""
Apr 21 10:41:34.989933 containerd[1463]: time="2026-04-21T10:41:34.989246961Z" level=info msg="StartContainer for \"0230f9a440a784596592ae01aa9a46c3afdfc40817288cfde0d1965ca1bed4ef\""
Apr 21 10:41:35.016817 systemd[1]: Started cri-containerd-01b35b2725ccdf5cd8432527b998295633e8282463260f36b900b378e5653878.scope - libcontainer container 01b35b2725ccdf5cd8432527b998295633e8282463260f36b900b378e5653878.
Apr 21 10:41:35.017665 systemd[1]: Started cri-containerd-9cf895e14311e251e1b2d2c52cefa3dcd8ae7b06fa8135e7550173e6499a734e.scope - libcontainer container 9cf895e14311e251e1b2d2c52cefa3dcd8ae7b06fa8135e7550173e6499a734e.
Apr 21 10:41:35.020298 systemd[1]: Started cri-containerd-0230f9a440a784596592ae01aa9a46c3afdfc40817288cfde0d1965ca1bed4ef.scope - libcontainer container 0230f9a440a784596592ae01aa9a46c3afdfc40817288cfde0d1965ca1bed4ef.
Apr 21 10:41:35.056988 containerd[1463]: time="2026-04-21T10:41:35.056953588Z" level=info msg="StartContainer for \"9cf895e14311e251e1b2d2c52cefa3dcd8ae7b06fa8135e7550173e6499a734e\" returns successfully"
Apr 21 10:41:35.057279 containerd[1463]: time="2026-04-21T10:41:35.056966465Z" level=info msg="StartContainer for \"01b35b2725ccdf5cd8432527b998295633e8282463260f36b900b378e5653878\" returns successfully"
Apr 21 10:41:35.066215 containerd[1463]: time="2026-04-21T10:41:35.066088914Z" level=info msg="StartContainer for \"0230f9a440a784596592ae01aa9a46c3afdfc40817288cfde0d1965ca1bed4ef\" returns successfully"
Apr 21 10:41:35.534661 kubelet[2137]: I0421 10:41:35.534580 2137 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:41:35.656771 kubelet[2137]: E0421 10:41:35.656718 2137 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 21 10:41:35.749740 kubelet[2137]: I0421 10:41:35.749659 2137 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 21 10:41:35.749740 kubelet[2137]: E0421 10:41:35.749700 2137 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 21 10:41:35.757832 kubelet[2137]: E0421 10:41:35.757802 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:35.859035 kubelet[2137]: E0421 10:41:35.858842 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:35.946417 kubelet[2137]: E0421 10:41:35.946384 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:41:35.946568 kubelet[2137]: E0421 10:41:35.946503 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:35.947200 kubelet[2137]: E0421 10:41:35.947182 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:41:35.947265 kubelet[2137]: E0421 10:41:35.947249 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:35.948462 kubelet[2137]: E0421 10:41:35.948434 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:41:35.948542 kubelet[2137]: E0421 10:41:35.948528 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:35.959591 kubelet[2137]: E0421 10:41:35.959545 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.060371 kubelet[2137]: E0421 10:41:36.060303 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.161438 kubelet[2137]: E0421 10:41:36.161058 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.262188 kubelet[2137]: E0421 10:41:36.262114 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.362931 kubelet[2137]: E0421 10:41:36.362839 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.463362 kubelet[2137]: E0421 10:41:36.463238 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.564076 kubelet[2137]: E0421 10:41:36.563965 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.664723 kubelet[2137]: E0421 10:41:36.664655 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.765640 kubelet[2137]: E0421 10:41:36.765467 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.866229 kubelet[2137]: E0421 10:41:36.866171 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:36.949967 kubelet[2137]: E0421 10:41:36.949926 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:41:36.950097 kubelet[2137]: E0421 10:41:36.950019 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:36.950168 kubelet[2137]: E0421 10:41:36.950142 2137 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 10:41:36.950268 kubelet[2137]: E0421 10:41:36.950249 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:36.967344 kubelet[2137]: E0421 10:41:36.967319 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:37.068276 kubelet[2137]: E0421 10:41:37.068123 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:37.169345 kubelet[2137]: E0421 10:41:37.169272 2137 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 10:41:37.218856 kubelet[2137]: I0421 10:41:37.218796 2137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:37.226452 kubelet[2137]: I0421 10:41:37.226386 2137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:37.229556 kubelet[2137]: I0421 10:41:37.229522 2137 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:37.862215 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)...
Apr 21 10:41:37.862236 systemd[1]: Reloading...
Apr 21 10:41:37.908648 kubelet[2137]: I0421 10:41:37.907648 2137 apiserver.go:52] "Watching apiserver"
Apr 21 10:41:37.914037 kubelet[2137]: E0421 10:41:37.913999 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:37.916662 zram_generator::config[2466]: No configuration found.
Apr 21 10:41:37.919001 kubelet[2137]: I0421 10:41:37.918956 2137 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 21 10:41:37.951159 kubelet[2137]: E0421 10:41:37.951098 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:37.951349 kubelet[2137]: E0421 10:41:37.951329 2137 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:37.993835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:41:38.046160 systemd[1]: Reloading finished in 183 ms.
Apr 21 10:41:38.075741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:41:38.098768 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:41:38.098977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:41:38.106928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:41:38.194389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:41:38.197438 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:41:38.227234 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:41:38.227234 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:41:38.227234 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:41:38.227473 kubelet[2511]: I0421 10:41:38.227257 2511 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:41:38.233303 kubelet[2511]: I0421 10:41:38.233257 2511 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:41:38.233303 kubelet[2511]: I0421 10:41:38.233283 2511 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:41:38.233432 kubelet[2511]: I0421 10:41:38.233407 2511 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:41:38.234251 kubelet[2511]: I0421 10:41:38.234230 2511 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 10:41:38.235795 kubelet[2511]: I0421 10:41:38.235772 2511 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:41:38.239571 kubelet[2511]: E0421 10:41:38.239333 2511 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:41:38.239571 kubelet[2511]: I0421 10:41:38.239353 2511 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:41:38.243238 kubelet[2511]: I0421 10:41:38.243179 2511 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:41:38.243382 kubelet[2511]: I0421 10:41:38.243333 2511 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:41:38.243504 kubelet[2511]: I0421 10:41:38.243379 2511 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:41:38.243504 kubelet[2511]: I0421 10:41:38.243503 2511 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:41:38.243600 kubelet[2511]: I0421 10:41:38.243510 2511 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:41:38.243600 kubelet[2511]: I0421 10:41:38.243543 2511 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:41:38.243704 kubelet[2511]: I0421 10:41:38.243692 2511 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:41:38.243720 kubelet[2511]: I0421 10:41:38.243706 2511 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:41:38.243738 kubelet[2511]: I0421 10:41:38.243729 2511 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:41:38.243756 kubelet[2511]: I0421 10:41:38.243742 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:41:38.246719 kubelet[2511]: I0421 10:41:38.246704 2511 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:41:38.247519 kubelet[2511]: I0421 10:41:38.247498 2511 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:41:38.250757 kubelet[2511]: I0421 10:41:38.250059 2511 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:41:38.250757 kubelet[2511]: I0421 10:41:38.250085 2511 server.go:1289] "Started kubelet"
Apr 21 10:41:38.252491 kubelet[2511]: I0421 10:41:38.252459 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:41:38.255261 kubelet[2511]: E0421 10:41:38.253955 2511 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:41:38.255261 kubelet[2511]: I0421 10:41:38.254363 2511 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:41:38.255261 kubelet[2511]: I0421 10:41:38.254407 2511 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:41:38.255261 kubelet[2511]: I0421 10:41:38.254458 2511 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:41:38.255261 kubelet[2511]: I0421 10:41:38.254726 2511 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:41:38.255261 kubelet[2511]: I0421 10:41:38.255066 2511 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:41:38.256995 kubelet[2511]: I0421 10:41:38.256350 2511 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:41:38.256995 kubelet[2511]: I0421 10:41:38.256537 2511 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:41:38.259961 kubelet[2511]: I0421 10:41:38.259431 2511 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:41:38.259961 kubelet[2511]: I0421 10:41:38.259448 2511 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:41:38.263573 kubelet[2511]: I0421 10:41:38.263479 2511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:41:38.263786 kubelet[2511]: I0421 10:41:38.263715 2511 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:41:38.270881 kubelet[2511]: I0421 10:41:38.270823 2511 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:41:38.272243 kubelet[2511]: I0421 10:41:38.272222 2511 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:41:38.272406 kubelet[2511]: I0421 10:41:38.272252 2511 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:41:38.272406 kubelet[2511]: I0421 10:41:38.272265 2511 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:41:38.272406 kubelet[2511]: I0421 10:41:38.272271 2511 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:41:38.272406 kubelet[2511]: E0421 10:41:38.272306 2511 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:41:38.287313 kubelet[2511]: I0421 10:41:38.287284 2511 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:41:38.287313 kubelet[2511]: I0421 10:41:38.287301 2511 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:41:38.287313 kubelet[2511]: I0421 10:41:38.287315 2511 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:41:38.287443 kubelet[2511]: I0421 10:41:38.287404 2511 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 21 10:41:38.287443 kubelet[2511]: I0421 10:41:38.287410 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 21 10:41:38.287443 kubelet[2511]: I0421 10:41:38.287421 2511 policy_none.go:49] "None policy: Start"
Apr 21 10:41:38.287443 kubelet[2511]: I0421 10:41:38.287428 2511 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:41:38.287443 kubelet[2511]: I0421 10:41:38.287434 2511 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:41:38.287527 kubelet[2511]: I0421 10:41:38.287492 2511 state_mem.go:75] "Updated machine memory state"
Apr 21 10:41:38.291041 kubelet[2511]: E0421 10:41:38.291012 2511 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:41:38.291210 kubelet[2511]: I0421 10:41:38.291155 2511 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:41:38.291210 kubelet[2511]: I0421 10:41:38.291170 2511 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:41:38.291349 kubelet[2511]: I0421 10:41:38.291310 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:41:38.292245 kubelet[2511]: E0421 10:41:38.292162 2511 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:41:38.373383 kubelet[2511]: I0421 10:41:38.373332 2511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:38.373383 kubelet[2511]: I0421 10:41:38.373400 2511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.373383 kubelet[2511]: I0421 10:41:38.373422 2511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:38.380105 kubelet[2511]: E0421 10:41:38.380043 2511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:38.380105 kubelet[2511]: E0421 10:41:38.380074 2511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:38.380706 kubelet[2511]: E0421 10:41:38.380679 2511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.397837 kubelet[2511]: I0421 10:41:38.397817 2511 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 21 10:41:38.403496 kubelet[2511]: I0421 10:41:38.403445 2511 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 21 10:41:38.403653 kubelet[2511]: I0421 10:41:38.403546 2511 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 21 10:41:38.556024 kubelet[2511]: I0421 10:41:38.555863 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:38.556024 kubelet[2511]: I0421 10:41:38.555905 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.556024 kubelet[2511]: I0421 10:41:38.555927 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.556024 kubelet[2511]: I0421 10:41:38.555943 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.556024 kubelet[2511]: I0421 10:41:38.555960 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:38.556298 kubelet[2511]: I0421 10:41:38.555972 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:38.556298 kubelet[2511]: I0421 10:41:38.555986 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/849e6febcf6f38882d9143cfc81a10ca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"849e6febcf6f38882d9143cfc81a10ca\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:38.556298 kubelet[2511]: I0421 10:41:38.555998 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.556298 kubelet[2511]: I0421 10:41:38.556010 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 10:41:38.680658 kubelet[2511]: E0421 10:41:38.680562 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:38.680848 kubelet[2511]: E0421 10:41:38.680723 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:38.680881 kubelet[2511]: E0421 10:41:38.680848 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:38.863108 sudo[2550]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 21 10:41:38.863339 sudo[2550]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 21 10:41:39.244606 kubelet[2511]: I0421 10:41:39.244441 2511 apiserver.go:52] "Watching apiserver"
Apr 21 10:41:39.254886 kubelet[2511]: I0421 10:41:39.254817 2511 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 21 10:41:39.279689 kubelet[2511]: I0421 10:41:39.279663 2511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:39.280200 kubelet[2511]: E0421 10:41:39.279984 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:39.280200 kubelet[2511]: I0421 10:41:39.280062 2511 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:39.285368 kubelet[2511]: E0421 10:41:39.285245 2511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 10:41:39.285487 kubelet[2511]: E0421 10:41:39.285458 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:39.289165 kubelet[2511]: E0421 10:41:39.289145 2511 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 10:41:39.289491 kubelet[2511]: E0421 10:41:39.289410 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:39.298199 kubelet[2511]: I0421 10:41:39.298126 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.298115622 podStartE2EDuration="2.298115622s" podCreationTimestamp="2026-04-21 10:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:41:39.293088275 +0000 UTC m=+1.092620531" watchObservedRunningTime="2026-04-21 10:41:39.298115622 +0000 UTC m=+1.097647888"
Apr 21 10:41:39.303580 kubelet[2511]: I0421 10:41:39.303336 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.303327502 podStartE2EDuration="2.303327502s" podCreationTimestamp="2026-04-21 10:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:41:39.298247822 +0000 UTC m=+1.097780076" watchObservedRunningTime="2026-04-21 10:41:39.303327502 +0000 UTC m=+1.102859756"
Apr 21 10:41:39.310199 kubelet[2511]: I0421 10:41:39.310154 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.310143487 podStartE2EDuration="2.310143487s" podCreationTimestamp="2026-04-21 10:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:41:39.303655728 +0000 UTC m=+1.103187970" watchObservedRunningTime="2026-04-21 10:41:39.310143487 +0000 UTC m=+1.109675740"
Apr 21 10:41:39.311754 sudo[2550]: pam_unix(sudo:session): session closed for user root
Apr 21 10:41:40.281273 kubelet[2511]: E0421 10:41:40.281216 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:40.281757 kubelet[2511]: E0421 10:41:40.281739 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:41:40.464637 sudo[1638]: pam_unix(sudo:session): session closed for user root
Apr 21 10:41:40.466123 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 21 10:41:40.468648 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:56830.service: Deactivated successfully.
Apr 21 10:41:40.469806 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:41:40.469937 systemd[1]: session-7.scope: Consumed 3.794s CPU time, 160.2M memory peak, 0B memory swap peak.
Apr 21 10:41:40.470351 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:41:40.470999 systemd-logind[1446]: Removed session 7.
Apr 21 10:41:43.729120 kubelet[2511]: E0421 10:41:43.729073 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:44.131456 kubelet[2511]: E0421 10:41:44.131299 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:44.438695 kubelet[2511]: I0421 10:41:44.438495 2511 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:41:44.438995 containerd[1463]: time="2026-04-21T10:41:44.438959459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:41:44.439344 kubelet[2511]: I0421 10:41:44.439187 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:41:45.164422 kubelet[2511]: E0421 10:41:45.164376 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.287949 kubelet[2511]: E0421 10:41:45.287717 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.528796 systemd[1]: Created slice kubepods-besteffort-podd6533459_8d1e_4412_96d9_55318ad412a6.slice - libcontainer container kubepods-besteffort-podd6533459_8d1e_4412_96d9_55318ad412a6.slice. Apr 21 10:41:45.549062 systemd[1]: Created slice kubepods-burstable-pod110b2f98_bbca_4886_ab15_a251c87179b0.slice - libcontainer container kubepods-burstable-pod110b2f98_bbca_4886_ab15_a251c87179b0.slice. 
Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598402 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6533459-8d1e-4412-96d9-55318ad412a6-kube-proxy\") pod \"kube-proxy-dj5dt\" (UID: \"d6533459-8d1e-4412-96d9-55318ad412a6\") " pod="kube-system/kube-proxy-dj5dt" Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598435 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqz89\" (UniqueName: \"kubernetes.io/projected/d6533459-8d1e-4412-96d9-55318ad412a6-kube-api-access-gqz89\") pod \"kube-proxy-dj5dt\" (UID: \"d6533459-8d1e-4412-96d9-55318ad412a6\") " pod="kube-system/kube-proxy-dj5dt" Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598449 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cni-path\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598462 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-etc-cni-netd\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598475 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-xtables-lock\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598469 kubelet[2511]: I0421 10:41:45.598487 2511 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6533459-8d1e-4412-96d9-55318ad412a6-lib-modules\") pod \"kube-proxy-dj5dt\" (UID: \"d6533459-8d1e-4412-96d9-55318ad412a6\") " pod="kube-system/kube-proxy-dj5dt" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598497 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-run\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598508 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-config-path\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598518 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-hubble-tls\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598528 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-hostproc\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598538 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-cgroup\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598740 kubelet[2511]: I0421 10:41:45.598549 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/110b2f98-bbca-4886-ab15-a251c87179b0-clustermesh-secrets\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598835 kubelet[2511]: I0421 10:41:45.598562 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6533459-8d1e-4412-96d9-55318ad412a6-xtables-lock\") pod \"kube-proxy-dj5dt\" (UID: \"d6533459-8d1e-4412-96d9-55318ad412a6\") " pod="kube-system/kube-proxy-dj5dt" Apr 21 10:41:45.598835 kubelet[2511]: I0421 10:41:45.598572 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-bpf-maps\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598835 kubelet[2511]: I0421 10:41:45.598583 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-lib-modules\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598835 kubelet[2511]: I0421 10:41:45.598596 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-net\") pod \"cilium-wqbv7\" (UID: 
\"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598835 kubelet[2511]: I0421 10:41:45.598628 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-kernel\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.598913 kubelet[2511]: I0421 10:41:45.598641 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82sk\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-kube-api-access-x82sk\") pod \"cilium-wqbv7\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " pod="kube-system/cilium-wqbv7" Apr 21 10:41:45.616754 systemd[1]: Created slice kubepods-besteffort-pod4aec8b95_943d_49cb_93f5_673d3c8bc120.slice - libcontainer container kubepods-besteffort-pod4aec8b95_943d_49cb_93f5_673d3c8bc120.slice. 
Apr 21 10:41:45.699455 kubelet[2511]: I0421 10:41:45.699290 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aec8b95-943d-49cb-93f5-673d3c8bc120-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-n4jxx\" (UID: \"4aec8b95-943d-49cb-93f5-673d3c8bc120\") " pod="kube-system/cilium-operator-6c4d7847fc-n4jxx" Apr 21 10:41:45.699455 kubelet[2511]: I0421 10:41:45.699327 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkxs\" (UniqueName: \"kubernetes.io/projected/4aec8b95-943d-49cb-93f5-673d3c8bc120-kube-api-access-hxkxs\") pod \"cilium-operator-6c4d7847fc-n4jxx\" (UID: \"4aec8b95-943d-49cb-93f5-673d3c8bc120\") " pod="kube-system/cilium-operator-6c4d7847fc-n4jxx" Apr 21 10:41:45.848176 kubelet[2511]: E0421 10:41:45.847995 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.849065 containerd[1463]: time="2026-04-21T10:41:45.848731546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dj5dt,Uid:d6533459-8d1e-4412-96d9-55318ad412a6,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:45.850824 kubelet[2511]: E0421 10:41:45.850781 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.851402 containerd[1463]: time="2026-04-21T10:41:45.851333491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqbv7,Uid:110b2f98-bbca-4886-ab15-a251c87179b0,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:45.877855 containerd[1463]: time="2026-04-21T10:41:45.877654284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:45.877855 containerd[1463]: time="2026-04-21T10:41:45.877698541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:45.877855 containerd[1463]: time="2026-04-21T10:41:45.877711490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.877855 containerd[1463]: time="2026-04-21T10:41:45.877764022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.881849 containerd[1463]: time="2026-04-21T10:41:45.881666882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:45.881995 containerd[1463]: time="2026-04-21T10:41:45.881829738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:45.881995 containerd[1463]: time="2026-04-21T10:41:45.881857327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.881995 containerd[1463]: time="2026-04-21T10:41:45.881925261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.892773 systemd[1]: Started cri-containerd-ef7df4c6f09bb8e7cfcb602be2de87ced476e6c5935443c549c2c0bed2e74e90.scope - libcontainer container ef7df4c6f09bb8e7cfcb602be2de87ced476e6c5935443c549c2c0bed2e74e90. Apr 21 10:41:45.896870 systemd[1]: Started cri-containerd-824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454.scope - libcontainer container 824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454. 
Apr 21 10:41:45.913724 containerd[1463]: time="2026-04-21T10:41:45.913680471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dj5dt,Uid:d6533459-8d1e-4412-96d9-55318ad412a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef7df4c6f09bb8e7cfcb602be2de87ced476e6c5935443c549c2c0bed2e74e90\"" Apr 21 10:41:45.914637 kubelet[2511]: E0421 10:41:45.914594 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.916760 containerd[1463]: time="2026-04-21T10:41:45.916716349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqbv7,Uid:110b2f98-bbca-4886-ab15-a251c87179b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\"" Apr 21 10:41:45.917796 kubelet[2511]: E0421 10:41:45.917734 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.918722 containerd[1463]: time="2026-04-21T10:41:45.918599890Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 10:41:45.921597 containerd[1463]: time="2026-04-21T10:41:45.921564688Z" level=info msg="CreateContainer within sandbox \"ef7df4c6f09bb8e7cfcb602be2de87ced476e6c5935443c549c2c0bed2e74e90\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:41:45.921687 kubelet[2511]: E0421 10:41:45.921670 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:45.921979 containerd[1463]: time="2026-04-21T10:41:45.921937511Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n4jxx,Uid:4aec8b95-943d-49cb-93f5-673d3c8bc120,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:45.952188 containerd[1463]: time="2026-04-21T10:41:45.952123377Z" level=info msg="CreateContainer within sandbox \"ef7df4c6f09bb8e7cfcb602be2de87ced476e6c5935443c549c2c0bed2e74e90\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39060259cbb99092d2a91f33df7a78a23196f26a591f9211bc3b62cb5f432b78\"" Apr 21 10:41:45.953999 containerd[1463]: time="2026-04-21T10:41:45.952928435Z" level=info msg="StartContainer for \"39060259cbb99092d2a91f33df7a78a23196f26a591f9211bc3b62cb5f432b78\"" Apr 21 10:41:45.961694 containerd[1463]: time="2026-04-21T10:41:45.960535274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:41:45.961694 containerd[1463]: time="2026-04-21T10:41:45.961472047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:41:45.961694 containerd[1463]: time="2026-04-21T10:41:45.961483910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.961694 containerd[1463]: time="2026-04-21T10:41:45.961556696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:41:45.978818 systemd[1]: Started cri-containerd-7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0.scope - libcontainer container 7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0. Apr 21 10:41:45.981167 systemd[1]: Started cri-containerd-39060259cbb99092d2a91f33df7a78a23196f26a591f9211bc3b62cb5f432b78.scope - libcontainer container 39060259cbb99092d2a91f33df7a78a23196f26a591f9211bc3b62cb5f432b78. 
Apr 21 10:41:46.003152 containerd[1463]: time="2026-04-21T10:41:46.003114525Z" level=info msg="StartContainer for \"39060259cbb99092d2a91f33df7a78a23196f26a591f9211bc3b62cb5f432b78\" returns successfully" Apr 21 10:41:46.012650 containerd[1463]: time="2026-04-21T10:41:46.012583830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n4jxx,Uid:4aec8b95-943d-49cb-93f5-673d3c8bc120,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\"" Apr 21 10:41:46.014269 kubelet[2511]: E0421 10:41:46.014248 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:46.292653 kubelet[2511]: E0421 10:41:46.292481 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:53.733844 kubelet[2511]: E0421 10:41:53.733568 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:53.744458 kubelet[2511]: I0421 10:41:53.744419 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dj5dt" podStartSLOduration=8.744406958999999 podStartE2EDuration="8.744406959s" podCreationTimestamp="2026-04-21 10:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:41:46.302421068 +0000 UTC m=+8.101953321" watchObservedRunningTime="2026-04-21 10:41:53.744406959 +0000 UTC m=+15.543939262" Apr 21 10:41:54.136601 kubelet[2511]: E0421 10:41:54.135018 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:54.306009 kubelet[2511]: E0421 10:41:54.305952 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:54.505299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510186082.mount: Deactivated successfully. Apr 21 10:41:55.670287 containerd[1463]: time="2026-04-21T10:41:55.670227821Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:55.670667 containerd[1463]: time="2026-04-21T10:41:55.670566507Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 21 10:41:55.671323 containerd[1463]: time="2026-04-21T10:41:55.671283706Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:55.672508 containerd[1463]: time="2026-04-21T10:41:55.672473521Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.753750825s" Apr 21 10:41:55.672540 containerd[1463]: time="2026-04-21T10:41:55.672506902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 21 10:41:55.674021 
containerd[1463]: time="2026-04-21T10:41:55.673997546Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 21 10:41:55.676648 containerd[1463]: time="2026-04-21T10:41:55.676576954Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:41:55.688967 containerd[1463]: time="2026-04-21T10:41:55.688900648Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\"" Apr 21 10:41:55.690237 containerd[1463]: time="2026-04-21T10:41:55.689462358Z" level=info msg="StartContainer for \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\"" Apr 21 10:41:55.712783 systemd[1]: Started cri-containerd-84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f.scope - libcontainer container 84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f. Apr 21 10:41:55.731141 containerd[1463]: time="2026-04-21T10:41:55.731096173Z" level=info msg="StartContainer for \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\" returns successfully" Apr 21 10:41:55.738184 systemd[1]: cri-containerd-84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f.scope: Deactivated successfully. 
Apr 21 10:41:55.844383 containerd[1463]: time="2026-04-21T10:41:55.844326078Z" level=info msg="shim disconnected" id=84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f namespace=k8s.io Apr 21 10:41:55.844383 containerd[1463]: time="2026-04-21T10:41:55.844375816Z" level=warning msg="cleaning up after shim disconnected" id=84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f namespace=k8s.io Apr 21 10:41:55.844383 containerd[1463]: time="2026-04-21T10:41:55.844383038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:41:56.314476 kubelet[2511]: E0421 10:41:56.314440 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:56.321271 containerd[1463]: time="2026-04-21T10:41:56.321230060Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:41:56.335921 containerd[1463]: time="2026-04-21T10:41:56.335866730Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\"" Apr 21 10:41:56.336425 containerd[1463]: time="2026-04-21T10:41:56.336393186Z" level=info msg="StartContainer for \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\"" Apr 21 10:41:56.358824 systemd[1]: Started cri-containerd-70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967.scope - libcontainer container 70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967. 
Apr 21 10:41:56.378177 containerd[1463]: time="2026-04-21T10:41:56.378131582Z" level=info msg="StartContainer for \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\" returns successfully" Apr 21 10:41:56.389175 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:41:56.389456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:41:56.389525 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:41:56.396075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:41:56.396270 systemd[1]: cri-containerd-70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967.scope: Deactivated successfully. Apr 21 10:41:56.413544 containerd[1463]: time="2026-04-21T10:41:56.413455799Z" level=info msg="shim disconnected" id=70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967 namespace=k8s.io Apr 21 10:41:56.413544 containerd[1463]: time="2026-04-21T10:41:56.413502972Z" level=warning msg="cleaning up after shim disconnected" id=70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967 namespace=k8s.io Apr 21 10:41:56.413544 containerd[1463]: time="2026-04-21T10:41:56.413510739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:41:56.417124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:41:56.684518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f-rootfs.mount: Deactivated successfully. Apr 21 10:41:57.103651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083073352.mount: Deactivated successfully. 
Apr 21 10:41:57.317923 kubelet[2511]: E0421 10:41:57.317869 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:57.321786 containerd[1463]: time="2026-04-21T10:41:57.321750545Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:41:57.340459 containerd[1463]: time="2026-04-21T10:41:57.340396391Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\"" Apr 21 10:41:57.341319 containerd[1463]: time="2026-04-21T10:41:57.341192790Z" level=info msg="StartContainer for \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\"" Apr 21 10:41:57.376865 systemd[1]: Started cri-containerd-6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f.scope - libcontainer container 6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f. Apr 21 10:41:57.404596 containerd[1463]: time="2026-04-21T10:41:57.404539336Z" level=info msg="StartContainer for \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\" returns successfully" Apr 21 10:41:57.404863 systemd[1]: cri-containerd-6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f.scope: Deactivated successfully. 
Apr 21 10:41:57.419041 containerd[1463]: time="2026-04-21T10:41:57.419010051Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:57.419883 containerd[1463]: time="2026-04-21T10:41:57.419844979Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 21 10:41:57.424760 containerd[1463]: time="2026-04-21T10:41:57.424703867Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:41:57.425884 containerd[1463]: time="2026-04-21T10:41:57.425842245Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.751816254s" Apr 21 10:41:57.425884 containerd[1463]: time="2026-04-21T10:41:57.425880864Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 21 10:41:57.426016 containerd[1463]: time="2026-04-21T10:41:57.425986597Z" level=info msg="shim disconnected" id=6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f namespace=k8s.io Apr 21 10:41:57.426060 containerd[1463]: time="2026-04-21T10:41:57.426017687Z" level=warning msg="cleaning up after shim disconnected" 
id=6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f namespace=k8s.io Apr 21 10:41:57.426060 containerd[1463]: time="2026-04-21T10:41:57.426023933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:41:57.429996 containerd[1463]: time="2026-04-21T10:41:57.429932021Z" level=info msg="CreateContainer within sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 21 10:41:57.439160 containerd[1463]: time="2026-04-21T10:41:57.439121226Z" level=info msg="CreateContainer within sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\"" Apr 21 10:41:57.439661 containerd[1463]: time="2026-04-21T10:41:57.439605589Z" level=info msg="StartContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\"" Apr 21 10:41:57.463759 systemd[1]: Started cri-containerd-da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623.scope - libcontainer container da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623. 
Apr 21 10:41:57.481424 containerd[1463]: time="2026-04-21T10:41:57.481363620Z" level=info msg="StartContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" returns successfully" Apr 21 10:41:58.320275 kubelet[2511]: E0421 10:41:58.320105 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:58.321943 kubelet[2511]: E0421 10:41:58.321917 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:58.328482 containerd[1463]: time="2026-04-21T10:41:58.328446623Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:41:58.348186 kubelet[2511]: I0421 10:41:58.348051 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-n4jxx" podStartSLOduration=1.9366116039999999 podStartE2EDuration="13.348038998s" podCreationTimestamp="2026-04-21 10:41:45 +0000 UTC" firstStartedPulling="2026-04-21 10:41:46.015055726 +0000 UTC m=+7.814587969" lastFinishedPulling="2026-04-21 10:41:57.426483114 +0000 UTC m=+19.226015363" observedRunningTime="2026-04-21 10:41:58.333285173 +0000 UTC m=+20.132817425" watchObservedRunningTime="2026-04-21 10:41:58.348038998 +0000 UTC m=+20.147571251" Apr 21 10:41:58.370716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108936563.mount: Deactivated successfully. Apr 21 10:41:58.380598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409334508.mount: Deactivated successfully. 
Apr 21 10:41:58.383616 containerd[1463]: time="2026-04-21T10:41:58.383572196Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\"" Apr 21 10:41:58.384411 containerd[1463]: time="2026-04-21T10:41:58.384025451Z" level=info msg="StartContainer for \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\"" Apr 21 10:41:58.446578 systemd[1]: Started cri-containerd-cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a.scope - libcontainer container cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a. Apr 21 10:41:58.462514 systemd[1]: cri-containerd-cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a.scope: Deactivated successfully. Apr 21 10:41:58.465428 containerd[1463]: time="2026-04-21T10:41:58.465397465Z" level=info msg="StartContainer for \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\" returns successfully" Apr 21 10:41:58.481473 containerd[1463]: time="2026-04-21T10:41:58.481410556Z" level=info msg="shim disconnected" id=cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a namespace=k8s.io Apr 21 10:41:58.481473 containerd[1463]: time="2026-04-21T10:41:58.481466661Z" level=warning msg="cleaning up after shim disconnected" id=cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a namespace=k8s.io Apr 21 10:41:58.481473 containerd[1463]: time="2026-04-21T10:41:58.481477466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:41:59.330470 kubelet[2511]: E0421 10:41:59.330415 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:59.330839 kubelet[2511]: E0421 10:41:59.330569 2511 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:59.335827 containerd[1463]: time="2026-04-21T10:41:59.335756333Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:41:59.350973 containerd[1463]: time="2026-04-21T10:41:59.350921431Z" level=info msg="CreateContainer within sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\"" Apr 21 10:41:59.351449 containerd[1463]: time="2026-04-21T10:41:59.351412663Z" level=info msg="StartContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\"" Apr 21 10:41:59.381802 systemd[1]: Started cri-containerd-df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875.scope - libcontainer container df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875. Apr 21 10:41:59.406407 containerd[1463]: time="2026-04-21T10:41:59.406358954Z" level=info msg="StartContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" returns successfully" Apr 21 10:41:59.495020 kubelet[2511]: I0421 10:41:59.494975 2511 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:41:59.528373 systemd[1]: Created slice kubepods-burstable-podf558bb34_1c7d_4d2c_8061_bf6a0c366bc9.slice - libcontainer container kubepods-burstable-podf558bb34_1c7d_4d2c_8061_bf6a0c366bc9.slice. Apr 21 10:41:59.535468 systemd[1]: Created slice kubepods-burstable-podd84ddbe6_8e23_4f0d_a24d_a6d88c04807b.slice - libcontainer container kubepods-burstable-podd84ddbe6_8e23_4f0d_a24d_a6d88c04807b.slice. 
Apr 21 10:41:59.592921 kubelet[2511]: I0421 10:41:59.592487 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm26r\" (UniqueName: \"kubernetes.io/projected/f558bb34-1c7d-4d2c-8061-bf6a0c366bc9-kube-api-access-wm26r\") pod \"coredns-674b8bbfcf-9q5ds\" (UID: \"f558bb34-1c7d-4d2c-8061-bf6a0c366bc9\") " pod="kube-system/coredns-674b8bbfcf-9q5ds" Apr 21 10:41:59.592921 kubelet[2511]: I0421 10:41:59.592534 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp48k\" (UniqueName: \"kubernetes.io/projected/d84ddbe6-8e23-4f0d-a24d-a6d88c04807b-kube-api-access-pp48k\") pod \"coredns-674b8bbfcf-2ccrh\" (UID: \"d84ddbe6-8e23-4f0d-a24d-a6d88c04807b\") " pod="kube-system/coredns-674b8bbfcf-2ccrh" Apr 21 10:41:59.592921 kubelet[2511]: I0421 10:41:59.592547 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d84ddbe6-8e23-4f0d-a24d-a6d88c04807b-config-volume\") pod \"coredns-674b8bbfcf-2ccrh\" (UID: \"d84ddbe6-8e23-4f0d-a24d-a6d88c04807b\") " pod="kube-system/coredns-674b8bbfcf-2ccrh" Apr 21 10:41:59.592921 kubelet[2511]: I0421 10:41:59.592564 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f558bb34-1c7d-4d2c-8061-bf6a0c366bc9-config-volume\") pod \"coredns-674b8bbfcf-9q5ds\" (UID: \"f558bb34-1c7d-4d2c-8061-bf6a0c366bc9\") " pod="kube-system/coredns-674b8bbfcf-9q5ds" Apr 21 10:41:59.832069 kubelet[2511]: E0421 10:41:59.832023 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:59.834699 containerd[1463]: time="2026-04-21T10:41:59.834601537Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9q5ds,Uid:f558bb34-1c7d-4d2c-8061-bf6a0c366bc9,Namespace:kube-system,Attempt:0,}" Apr 21 10:41:59.837469 kubelet[2511]: E0421 10:41:59.837426 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:41:59.837929 containerd[1463]: time="2026-04-21T10:41:59.837879633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2ccrh,Uid:d84ddbe6-8e23-4f0d-a24d-a6d88c04807b,Namespace:kube-system,Attempt:0,}" Apr 21 10:42:00.334100 kubelet[2511]: E0421 10:42:00.334065 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:00.356863 kubelet[2511]: I0421 10:42:00.356430 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqbv7" podStartSLOduration=5.601473229 podStartE2EDuration="15.356412079s" podCreationTimestamp="2026-04-21 10:41:45 +0000 UTC" firstStartedPulling="2026-04-21 10:41:45.918314951 +0000 UTC m=+7.717847193" lastFinishedPulling="2026-04-21 10:41:55.6732538 +0000 UTC m=+17.472786043" observedRunningTime="2026-04-21 10:42:00.356057978 +0000 UTC m=+22.155590227" watchObservedRunningTime="2026-04-21 10:42:00.356412079 +0000 UTC m=+22.155944333" Apr 21 10:42:01.293862 systemd-networkd[1386]: cilium_host: Link UP Apr 21 10:42:01.294751 systemd-networkd[1386]: cilium_net: Link UP Apr 21 10:42:01.294912 systemd-networkd[1386]: cilium_net: Gained carrier Apr 21 10:42:01.295030 systemd-networkd[1386]: cilium_host: Gained carrier Apr 21 10:42:01.335990 kubelet[2511]: E0421 10:42:01.335964 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:01.368033 
systemd-networkd[1386]: cilium_vxlan: Link UP Apr 21 10:42:01.368044 systemd-networkd[1386]: cilium_vxlan: Gained carrier Apr 21 10:42:01.544655 kernel: NET: Registered PF_ALG protocol family Apr 21 10:42:01.616256 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:41904.service - OpenSSH per-connection server daemon (10.0.0.1:41904). Apr 21 10:42:01.648726 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 41904 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:01.650127 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:01.656383 systemd-logind[1446]: New session 8 of user core. Apr 21 10:42:01.660843 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:42:01.678928 update_engine[1448]: I20260421 10:42:01.678853 1448 update_attempter.cc:509] Updating boot flags... Apr 21 10:42:01.698663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3370) Apr 21 10:42:01.721137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3370) Apr 21 10:42:01.805242 sshd[3470]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:01.807899 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:41904.service: Deactivated successfully. Apr 21 10:42:01.809262 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:42:01.810298 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:42:01.811377 systemd-logind[1446]: Removed session 8. 
Apr 21 10:42:01.949770 systemd-networkd[1386]: cilium_net: Gained IPv6LL Apr 21 10:42:01.949985 systemd-networkd[1386]: cilium_host: Gained IPv6LL Apr 21 10:42:02.079386 systemd-networkd[1386]: lxc_health: Link UP Apr 21 10:42:02.079827 systemd-networkd[1386]: lxc_health: Gained carrier Apr 21 10:42:02.337731 kubelet[2511]: E0421 10:42:02.337703 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:02.396754 systemd-networkd[1386]: lxc618a9800e2b0: Link UP Apr 21 10:42:02.404733 kernel: eth0: renamed from tmpeb87f Apr 21 10:42:02.415682 systemd-networkd[1386]: lxc618a9800e2b0: Gained carrier Apr 21 10:42:02.418424 systemd-networkd[1386]: lxcd2957297518c: Link UP Apr 21 10:42:02.430183 kernel: eth0: renamed from tmpaf337 Apr 21 10:42:02.439009 systemd-networkd[1386]: lxcd2957297518c: Gained carrier Apr 21 10:42:02.652810 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Apr 21 10:42:03.487710 systemd-networkd[1386]: lxc_health: Gained IPv6LL Apr 21 10:42:03.487986 systemd-networkd[1386]: lxcd2957297518c: Gained IPv6LL Apr 21 10:42:03.852488 kubelet[2511]: E0421 10:42:03.852436 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:04.189938 systemd-networkd[1386]: lxc618a9800e2b0: Gained IPv6LL Apr 21 10:42:04.353039 kubelet[2511]: E0421 10:42:04.352963 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:05.356258 kubelet[2511]: E0421 10:42:05.356111 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:05.392108 
containerd[1463]: time="2026-04-21T10:42:05.391974645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:42:05.392108 containerd[1463]: time="2026-04-21T10:42:05.392029662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:42:05.392108 containerd[1463]: time="2026-04-21T10:42:05.392040979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:05.392492 containerd[1463]: time="2026-04-21T10:42:05.392115004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:05.397845 containerd[1463]: time="2026-04-21T10:42:05.397782339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:42:05.397845 containerd[1463]: time="2026-04-21T10:42:05.397824004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:42:05.397845 containerd[1463]: time="2026-04-21T10:42:05.397832925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:05.397963 containerd[1463]: time="2026-04-21T10:42:05.397888629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:05.420811 systemd[1]: Started cri-containerd-af3378d434e1cb2b03b2e073bcf2bfa440888deda319f9f346c3224806228f1c.scope - libcontainer container af3378d434e1cb2b03b2e073bcf2bfa440888deda319f9f346c3224806228f1c. 
Apr 21 10:42:05.422009 systemd[1]: Started cri-containerd-eb87f777eff8821996eb172f19339b3b4026067c0c51c12f7f3cbf0ef1fd65d8.scope - libcontainer container eb87f777eff8821996eb172f19339b3b4026067c0c51c12f7f3cbf0ef1fd65d8. Apr 21 10:42:05.429180 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:42:05.430507 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:42:05.455072 containerd[1463]: time="2026-04-21T10:42:05.455043025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9q5ds,Uid:f558bb34-1c7d-4d2c-8061-bf6a0c366bc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb87f777eff8821996eb172f19339b3b4026067c0c51c12f7f3cbf0ef1fd65d8\"" Apr 21 10:42:05.455502 containerd[1463]: time="2026-04-21T10:42:05.455471864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2ccrh,Uid:d84ddbe6-8e23-4f0d-a24d-a6d88c04807b,Namespace:kube-system,Attempt:0,} returns sandbox id \"af3378d434e1cb2b03b2e073bcf2bfa440888deda319f9f346c3224806228f1c\"" Apr 21 10:42:05.456316 kubelet[2511]: E0421 10:42:05.456272 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:05.456368 kubelet[2511]: E0421 10:42:05.456315 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:05.477054 containerd[1463]: time="2026-04-21T10:42:05.477008919Z" level=info msg="CreateContainer within sandbox \"af3378d434e1cb2b03b2e073bcf2bfa440888deda319f9f346c3224806228f1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:42:05.477195 containerd[1463]: time="2026-04-21T10:42:05.477037928Z" level=info 
msg="CreateContainer within sandbox \"eb87f777eff8821996eb172f19339b3b4026067c0c51c12f7f3cbf0ef1fd65d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:42:05.496651 containerd[1463]: time="2026-04-21T10:42:05.496584388Z" level=info msg="CreateContainer within sandbox \"eb87f777eff8821996eb172f19339b3b4026067c0c51c12f7f3cbf0ef1fd65d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ea4b32bce320a0782528ba55e09c5c66ab8523d3599123eec429066290e7d5e\"" Apr 21 10:42:05.497344 containerd[1463]: time="2026-04-21T10:42:05.497321029Z" level=info msg="StartContainer for \"9ea4b32bce320a0782528ba55e09c5c66ab8523d3599123eec429066290e7d5e\"" Apr 21 10:42:05.499467 containerd[1463]: time="2026-04-21T10:42:05.499023322Z" level=info msg="CreateContainer within sandbox \"af3378d434e1cb2b03b2e073bcf2bfa440888deda319f9f346c3224806228f1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c03ab31793ab80d7937fce3870f4522561ab32b2badfc9dd1181c4e4d29974f1\"" Apr 21 10:42:05.500535 containerd[1463]: time="2026-04-21T10:42:05.499805158Z" level=info msg="StartContainer for \"c03ab31793ab80d7937fce3870f4522561ab32b2badfc9dd1181c4e4d29974f1\"" Apr 21 10:42:05.526865 systemd[1]: Started cri-containerd-9ea4b32bce320a0782528ba55e09c5c66ab8523d3599123eec429066290e7d5e.scope - libcontainer container 9ea4b32bce320a0782528ba55e09c5c66ab8523d3599123eec429066290e7d5e. Apr 21 10:42:05.528052 systemd[1]: Started cri-containerd-c03ab31793ab80d7937fce3870f4522561ab32b2badfc9dd1181c4e4d29974f1.scope - libcontainer container c03ab31793ab80d7937fce3870f4522561ab32b2badfc9dd1181c4e4d29974f1. 
Apr 21 10:42:05.551066 containerd[1463]: time="2026-04-21T10:42:05.551006087Z" level=info msg="StartContainer for \"c03ab31793ab80d7937fce3870f4522561ab32b2badfc9dd1181c4e4d29974f1\" returns successfully" Apr 21 10:42:05.551066 containerd[1463]: time="2026-04-21T10:42:05.551006179Z" level=info msg="StartContainer for \"9ea4b32bce320a0782528ba55e09c5c66ab8523d3599123eec429066290e7d5e\" returns successfully" Apr 21 10:42:06.359319 kubelet[2511]: E0421 10:42:06.359158 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:06.362143 kubelet[2511]: E0421 10:42:06.361793 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:06.378716 kubelet[2511]: I0421 10:42:06.378352 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9q5ds" podStartSLOduration=21.378335703 podStartE2EDuration="21.378335703s" podCreationTimestamp="2026-04-21 10:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:42:06.370225105 +0000 UTC m=+28.169757359" watchObservedRunningTime="2026-04-21 10:42:06.378335703 +0000 UTC m=+28.177867957" Apr 21 10:42:06.389079 kubelet[2511]: I0421 10:42:06.389025 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2ccrh" podStartSLOduration=21.389014199000002 podStartE2EDuration="21.389014199s" podCreationTimestamp="2026-04-21 10:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:42:06.388219306 +0000 UTC m=+28.187751554" watchObservedRunningTime="2026-04-21 10:42:06.389014199 +0000 UTC 
m=+28.188546453" Apr 21 10:42:06.819794 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:33240.service - OpenSSH per-connection server daemon (10.0.0.1:33240). Apr 21 10:42:06.851356 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 33240 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:06.852533 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:06.856431 systemd-logind[1446]: New session 9 of user core. Apr 21 10:42:06.874786 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:42:06.999748 sshd[3925]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:07.002487 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:33240.service: Deactivated successfully. Apr 21 10:42:07.003804 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:42:07.004247 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:42:07.005060 systemd-logind[1446]: Removed session 9. 
Apr 21 10:42:07.363797 kubelet[2511]: E0421 10:42:07.363756 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:07.364222 kubelet[2511]: E0421 10:42:07.363898 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:08.366140 kubelet[2511]: E0421 10:42:08.366103 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:08.366140 kubelet[2511]: E0421 10:42:08.366187 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:12.015061 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:33250.service - OpenSSH per-connection server daemon (10.0.0.1:33250). Apr 21 10:42:12.042030 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 33250 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:12.042999 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:12.046572 systemd-logind[1446]: New session 10 of user core. Apr 21 10:42:12.050766 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:42:12.146570 sshd[3943]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:12.153764 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:33250.service: Deactivated successfully. Apr 21 10:42:12.154925 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:42:12.155953 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. 
Apr 21 10:42:12.162863 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256). Apr 21 10:42:12.163983 systemd-logind[1446]: Removed session 10. Apr 21 10:42:12.187223 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:12.188658 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:12.191797 systemd-logind[1446]: New session 11 of user core. Apr 21 10:42:12.201747 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:42:12.338660 sshd[3958]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:12.349260 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:33256.service: Deactivated successfully. Apr 21 10:42:12.354056 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:42:12.359104 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:42:12.368855 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:33272.service - OpenSSH per-connection server daemon (10.0.0.1:33272). Apr 21 10:42:12.369733 systemd-logind[1446]: Removed session 11. Apr 21 10:42:12.405503 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 33272 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:12.406821 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:12.410450 systemd-logind[1446]: New session 12 of user core. Apr 21 10:42:12.419802 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:42:12.518977 sshd[3970]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:12.521466 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:33272.service: Deactivated successfully. Apr 21 10:42:12.522697 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:42:12.523209 systemd-logind[1446]: Session 12 logged out. 
Waiting for processes to exit. Apr 21 10:42:12.523935 systemd-logind[1446]: Removed session 12. Apr 21 10:42:17.533322 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:33408.service - OpenSSH per-connection server daemon (10.0.0.1:33408). Apr 21 10:42:17.561439 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 33408 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:17.562524 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:17.566045 systemd-logind[1446]: New session 13 of user core. Apr 21 10:42:17.574914 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:42:17.679228 sshd[3987]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:17.682176 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:33408.service: Deactivated successfully. Apr 21 10:42:17.683386 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:42:17.683881 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:42:17.684724 systemd-logind[1446]: Removed session 13. Apr 21 10:42:22.692255 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:33418.service - OpenSSH per-connection server daemon (10.0.0.1:33418). Apr 21 10:42:22.757199 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 33418 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:22.758838 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:22.764228 systemd-logind[1446]: New session 14 of user core. Apr 21 10:42:22.784890 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:42:22.964107 sshd[4001]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:22.979431 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:33418.service: Deactivated successfully. Apr 21 10:42:22.988125 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 21 10:42:22.991088 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:42:23.016477 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:33422.service - OpenSSH per-connection server daemon (10.0.0.1:33422). Apr 21 10:42:23.017647 systemd-logind[1446]: Removed session 14. Apr 21 10:42:23.062423 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 33422 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:23.064575 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:23.076243 systemd-logind[1446]: New session 15 of user core. Apr 21 10:42:23.089890 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:42:23.413397 sshd[4015]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:23.422872 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:33422.service: Deactivated successfully. Apr 21 10:42:23.424184 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:42:23.425387 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:42:23.426528 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:33424.service - OpenSSH per-connection server daemon (10.0.0.1:33424). Apr 21 10:42:23.427122 systemd-logind[1446]: Removed session 15. Apr 21 10:42:23.456846 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 33424 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:23.457904 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:23.461063 systemd-logind[1446]: New session 16 of user core. Apr 21 10:42:23.470762 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:42:24.162855 sshd[4028]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:24.168776 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:33424.service: Deactivated successfully. 
Apr 21 10:42:24.171007 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:42:24.173234 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:42:24.180788 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:33426.service - OpenSSH per-connection server daemon (10.0.0.1:33426). Apr 21 10:42:24.181921 systemd-logind[1446]: Removed session 16. Apr 21 10:42:24.209312 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 33426 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:24.210521 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:24.213946 systemd-logind[1446]: New session 17 of user core. Apr 21 10:42:24.218774 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:42:24.445880 sshd[4048]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:24.455383 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:33426.service: Deactivated successfully. Apr 21 10:42:24.456584 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:42:24.457707 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:42:24.458970 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:33432.service - OpenSSH per-connection server daemon (10.0.0.1:33432). Apr 21 10:42:24.459491 systemd-logind[1446]: Removed session 17. Apr 21 10:42:24.486852 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 33432 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:24.488087 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:24.493373 systemd-logind[1446]: New session 18 of user core. Apr 21 10:42:24.503992 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 21 10:42:24.625849 sshd[4061]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:24.628313 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:33432.service: Deactivated successfully. Apr 21 10:42:24.630219 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:42:24.631928 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:42:24.632908 systemd-logind[1446]: Removed session 18. Apr 21 10:42:29.638004 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:41352.service - OpenSSH per-connection server daemon (10.0.0.1:41352). Apr 21 10:42:29.665889 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 41352 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:29.667157 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:29.671037 systemd-logind[1446]: New session 19 of user core. Apr 21 10:42:29.676879 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:42:29.773698 sshd[4078]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:29.776224 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:41352.service: Deactivated successfully. Apr 21 10:42:29.777511 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:42:29.778102 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:42:29.779014 systemd-logind[1446]: Removed session 19. Apr 21 10:42:34.785568 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:41354.service - OpenSSH per-connection server daemon (10.0.0.1:41354). Apr 21 10:42:34.813593 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 41354 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:34.814693 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:34.818594 systemd-logind[1446]: New session 20 of user core. 
Apr 21 10:42:34.828824 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:42:34.926319 sshd[4093]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:34.936705 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:41354.service: Deactivated successfully. Apr 21 10:42:34.937969 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 10:42:34.939032 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Apr 21 10:42:34.946837 systemd[1]: Started sshd@20-10.0.0.129:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362). Apr 21 10:42:34.947481 systemd-logind[1446]: Removed session 20. Apr 21 10:42:34.971313 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:34.972284 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:34.975992 systemd-logind[1446]: New session 21 of user core. Apr 21 10:42:34.992784 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 21 10:42:36.884782 containerd[1463]: time="2026-04-21T10:42:36.884736663Z" level=info msg="StopContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" with timeout 30 (s)" Apr 21 10:42:36.890377 containerd[1463]: time="2026-04-21T10:42:36.890328979Z" level=info msg="Stop container \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" with signal terminated" Apr 21 10:42:36.913886 containerd[1463]: time="2026-04-21T10:42:36.913822310Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:42:36.925026 containerd[1463]: time="2026-04-21T10:42:36.924899931Z" level=info msg="StopContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" with timeout 2 (s)" Apr 21 10:42:36.936339 containerd[1463]: time="2026-04-21T10:42:36.933273258Z" level=info msg="Stop container \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" with signal terminated" Apr 21 10:42:36.941572 systemd[1]: cri-containerd-da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623.scope: Deactivated successfully. Apr 21 10:42:36.958581 systemd-networkd[1386]: lxc_health: Link DOWN Apr 21 10:42:36.958590 systemd-networkd[1386]: lxc_health: Lost carrier Apr 21 10:42:37.011456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623-rootfs.mount: Deactivated successfully. Apr 21 10:42:37.015100 systemd[1]: cri-containerd-df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875.scope: Deactivated successfully. Apr 21 10:42:37.017106 systemd[1]: cri-containerd-df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875.scope: Consumed 5.452s CPU time. 
Apr 21 10:42:37.052162 containerd[1463]: time="2026-04-21T10:42:37.051982282Z" level=info msg="shim disconnected" id=da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623 namespace=k8s.io Apr 21 10:42:37.052464 containerd[1463]: time="2026-04-21T10:42:37.052261131Z" level=warning msg="cleaning up after shim disconnected" id=da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623 namespace=k8s.io Apr 21 10:42:37.052464 containerd[1463]: time="2026-04-21T10:42:37.052277397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:37.067377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875-rootfs.mount: Deactivated successfully. Apr 21 10:42:37.078644 containerd[1463]: time="2026-04-21T10:42:37.078441211Z" level=info msg="shim disconnected" id=df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875 namespace=k8s.io Apr 21 10:42:37.078644 containerd[1463]: time="2026-04-21T10:42:37.078556839Z" level=warning msg="cleaning up after shim disconnected" id=df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875 namespace=k8s.io Apr 21 10:42:37.078644 containerd[1463]: time="2026-04-21T10:42:37.078593651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:37.108569 containerd[1463]: time="2026-04-21T10:42:37.106095845Z" level=info msg="StopContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" returns successfully" Apr 21 10:42:37.114846 containerd[1463]: time="2026-04-21T10:42:37.114807193Z" level=info msg="StopPodSandbox for \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\"" Apr 21 10:42:37.117700 containerd[1463]: time="2026-04-21T10:42:37.115001809Z" level=info msg="Container to stop \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.126088 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0-shm.mount: Deactivated successfully. Apr 21 10:42:37.137159 containerd[1463]: time="2026-04-21T10:42:37.136963632Z" level=info msg="StopContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" returns successfully" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144798569Z" level=info msg="StopPodSandbox for \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\"" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144888657Z" level=info msg="Container to stop \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144938706Z" level=info msg="Container to stop \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144951611Z" level=info msg="Container to stop \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144964402Z" level=info msg="Container to stop \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.145159 containerd[1463]: time="2026-04-21T10:42:37.144976728Z" level=info msg="Container to stop \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:42:37.156869 systemd[1]: cri-containerd-7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0.scope: Deactivated successfully. 
Apr 21 10:42:37.174837 systemd[1]: cri-containerd-824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454.scope: Deactivated successfully. Apr 21 10:42:37.218789 containerd[1463]: time="2026-04-21T10:42:37.218363428Z" level=info msg="shim disconnected" id=824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454 namespace=k8s.io Apr 21 10:42:37.218789 containerd[1463]: time="2026-04-21T10:42:37.218416756Z" level=warning msg="cleaning up after shim disconnected" id=824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454 namespace=k8s.io Apr 21 10:42:37.218789 containerd[1463]: time="2026-04-21T10:42:37.218426333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:37.219808 containerd[1463]: time="2026-04-21T10:42:37.219509555Z" level=info msg="shim disconnected" id=7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0 namespace=k8s.io Apr 21 10:42:37.219808 containerd[1463]: time="2026-04-21T10:42:37.219659753Z" level=warning msg="cleaning up after shim disconnected" id=7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0 namespace=k8s.io Apr 21 10:42:37.219808 containerd[1463]: time="2026-04-21T10:42:37.219674304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:37.245928 containerd[1463]: time="2026-04-21T10:42:37.245844700Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:42:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:42:37.258029 containerd[1463]: time="2026-04-21T10:42:37.257933295Z" level=info msg="TearDown network for sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" successfully" Apr 21 10:42:37.258029 containerd[1463]: time="2026-04-21T10:42:37.258017854Z" level=info msg="StopPodSandbox for \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" returns successfully" Apr 21 
10:42:37.259049 containerd[1463]: time="2026-04-21T10:42:37.258992165Z" level=info msg="TearDown network for sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" successfully" Apr 21 10:42:37.259049 containerd[1463]: time="2026-04-21T10:42:37.259044082Z" level=info msg="StopPodSandbox for \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" returns successfully" Apr 21 10:42:37.382052 kubelet[2511]: I0421 10:42:37.381914 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-hubble-tls\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382052 kubelet[2511]: I0421 10:42:37.382014 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxkxs\" (UniqueName: \"kubernetes.io/projected/4aec8b95-943d-49cb-93f5-673d3c8bc120-kube-api-access-hxkxs\") pod \"4aec8b95-943d-49cb-93f5-673d3c8bc120\" (UID: \"4aec8b95-943d-49cb-93f5-673d3c8bc120\") " Apr 21 10:42:37.382052 kubelet[2511]: I0421 10:42:37.382049 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cni-path\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382052 kubelet[2511]: I0421 10:42:37.382077 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x82sk\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-kube-api-access-x82sk\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382102 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-config-path\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382119 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-cgroup\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382140 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-net\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382158 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-etc-cni-netd\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382179 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aec8b95-943d-49cb-93f5-673d3c8bc120-cilium-config-path\") pod \"4aec8b95-943d-49cb-93f5-673d3c8bc120\" (UID: \"4aec8b95-943d-49cb-93f5-673d3c8bc120\") " Apr 21 10:42:37.382830 kubelet[2511]: I0421 10:42:37.382198 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-xtables-lock\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382216 
2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-hostproc\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382263 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-bpf-maps\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382288 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/110b2f98-bbca-4886-ab15-a251c87179b0-clustermesh-secrets\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382306 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-run\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382326 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-lib-modules\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: \"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383014 kubelet[2511]: I0421 10:42:37.382344 2511 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-kernel\") pod \"110b2f98-bbca-4886-ab15-a251c87179b0\" (UID: 
\"110b2f98-bbca-4886-ab15-a251c87179b0\") " Apr 21 10:42:37.383187 kubelet[2511]: I0421 10:42:37.382454 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394419 kubelet[2511]: I0421 10:42:37.391458 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394419 kubelet[2511]: I0421 10:42:37.391511 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394419 kubelet[2511]: I0421 10:42:37.391530 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394419 kubelet[2511]: I0421 10:42:37.393640 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:42:37.394419 kubelet[2511]: I0421 10:42:37.393682 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394992 kubelet[2511]: I0421 10:42:37.394737 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394992 kubelet[2511]: I0421 10:42:37.394768 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394992 kubelet[2511]: I0421 10:42:37.394785 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.394992 kubelet[2511]: I0421 10:42:37.394802 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.396250 kubelet[2511]: I0421 10:42:37.396187 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 21 10:42:37.396813 kubelet[2511]: I0421 10:42:37.396783 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aec8b95-943d-49cb-93f5-673d3c8bc120-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4aec8b95-943d-49cb-93f5-673d3c8bc120" (UID: "4aec8b95-943d-49cb-93f5-673d3c8bc120"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:42:37.403932 kubelet[2511]: I0421 10:42:37.403835 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aec8b95-943d-49cb-93f5-673d3c8bc120-kube-api-access-hxkxs" (OuterVolumeSpecName: "kube-api-access-hxkxs") pod "4aec8b95-943d-49cb-93f5-673d3c8bc120" (UID: "4aec8b95-943d-49cb-93f5-673d3c8bc120"). InnerVolumeSpecName "kube-api-access-hxkxs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:42:37.405451 kubelet[2511]: I0421 10:42:37.405423 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:42:37.405569 kubelet[2511]: I0421 10:42:37.405427 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110b2f98-bbca-4886-ab15-a251c87179b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:42:37.409493 kubelet[2511]: I0421 10:42:37.409385 2511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-kube-api-access-x82sk" (OuterVolumeSpecName: "kube-api-access-x82sk") pod "110b2f98-bbca-4886-ab15-a251c87179b0" (UID: "110b2f98-bbca-4886-ab15-a251c87179b0"). InnerVolumeSpecName "kube-api-access-x82sk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:42:37.441304 kubelet[2511]: I0421 10:42:37.439094 2511 scope.go:117] "RemoveContainer" containerID="da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623" Apr 21 10:42:37.443160 containerd[1463]: time="2026-04-21T10:42:37.443004373Z" level=info msg="RemoveContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\"" Apr 21 10:42:37.448838 systemd[1]: Removed slice kubepods-besteffort-pod4aec8b95_943d_49cb_93f5_673d3c8bc120.slice - libcontainer container kubepods-besteffort-pod4aec8b95_943d_49cb_93f5_673d3c8bc120.slice. Apr 21 10:42:37.458472 containerd[1463]: time="2026-04-21T10:42:37.454452928Z" level=info msg="RemoveContainer for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" returns successfully" Apr 21 10:42:37.460772 kubelet[2511]: I0421 10:42:37.454831 2511 scope.go:117] "RemoveContainer" containerID="da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623" Apr 21 10:42:37.459398 systemd[1]: Removed slice kubepods-burstable-pod110b2f98_bbca_4886_ab15_a251c87179b0.slice - libcontainer container kubepods-burstable-pod110b2f98_bbca_4886_ab15_a251c87179b0.slice. Apr 21 10:42:37.460914 containerd[1463]: time="2026-04-21T10:42:37.459736767Z" level=error msg="ContainerStatus for \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\": not found" Apr 21 10:42:37.459572 systemd[1]: kubepods-burstable-pod110b2f98_bbca_4886_ab15_a251c87179b0.slice: Consumed 5.518s CPU time. 
Apr 21 10:42:37.483955 kubelet[2511]: E0421 10:42:37.483909 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\": not found" containerID="da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623" Apr 21 10:42:37.484313 kubelet[2511]: I0421 10:42:37.484203 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623"} err="failed to get container status \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\": rpc error: code = NotFound desc = an error occurred when try to find container \"da55f79c791b27172a7ed7a60d3c840afc13998d90d5e3f2fe139c950d73f623\": not found" Apr 21 10:42:37.484441 kubelet[2511]: I0421 10:42:37.484429 2511 scope.go:117] "RemoveContainer" containerID="df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484818 2511 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484835 2511 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484846 2511 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484859 2511 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/110b2f98-bbca-4886-ab15-a251c87179b0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484875 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484885 2511 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484895 2511 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.485036 kubelet[2511]: I0421 10:42:37.484906 2511 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484921 2511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxkxs\" (UniqueName: \"kubernetes.io/projected/4aec8b95-943d-49cb-93f5-673d3c8bc120-kube-api-access-hxkxs\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484933 2511 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484943 2511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x82sk\" (UniqueName: \"kubernetes.io/projected/110b2f98-bbca-4886-ab15-a251c87179b0-kube-api-access-x82sk\") on node 
\"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484958 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484969 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484980 2511 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.484991 2511 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/110b2f98-bbca-4886-ab15-a251c87179b0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.487604 kubelet[2511]: I0421 10:42:37.485004 2511 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aec8b95-943d-49cb-93f5-673d3c8bc120-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 21 10:42:37.497659 containerd[1463]: time="2026-04-21T10:42:37.493443364Z" level=info msg="RemoveContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\"" Apr 21 10:42:37.510325 containerd[1463]: time="2026-04-21T10:42:37.508982187Z" level=info msg="RemoveContainer for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" returns successfully" Apr 21 10:42:37.510493 kubelet[2511]: I0421 10:42:37.509674 2511 scope.go:117] "RemoveContainer" containerID="cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a" Apr 21 10:42:37.520383 
containerd[1463]: time="2026-04-21T10:42:37.520300152Z" level=info msg="RemoveContainer for \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\"" Apr 21 10:42:37.525748 containerd[1463]: time="2026-04-21T10:42:37.525603921Z" level=info msg="RemoveContainer for \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\" returns successfully" Apr 21 10:42:37.526054 kubelet[2511]: I0421 10:42:37.526004 2511 scope.go:117] "RemoveContainer" containerID="6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f" Apr 21 10:42:37.530987 containerd[1463]: time="2026-04-21T10:42:37.530863405Z" level=info msg="RemoveContainer for \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\"" Apr 21 10:42:37.538416 containerd[1463]: time="2026-04-21T10:42:37.538343756Z" level=info msg="RemoveContainer for \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\" returns successfully" Apr 21 10:42:37.538744 kubelet[2511]: I0421 10:42:37.538704 2511 scope.go:117] "RemoveContainer" containerID="70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967" Apr 21 10:42:37.541326 containerd[1463]: time="2026-04-21T10:42:37.540252960Z" level=info msg="RemoveContainer for \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\"" Apr 21 10:42:37.572766 containerd[1463]: time="2026-04-21T10:42:37.572655316Z" level=info msg="RemoveContainer for \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\" returns successfully" Apr 21 10:42:37.574386 kubelet[2511]: I0421 10:42:37.573387 2511 scope.go:117] "RemoveContainer" containerID="84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f" Apr 21 10:42:37.576650 containerd[1463]: time="2026-04-21T10:42:37.576581904Z" level=info msg="RemoveContainer for \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\"" Apr 21 10:42:37.583545 containerd[1463]: time="2026-04-21T10:42:37.583460359Z" level=info msg="RemoveContainer for 
\"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\" returns successfully" Apr 21 10:42:37.584285 kubelet[2511]: I0421 10:42:37.584094 2511 scope.go:117] "RemoveContainer" containerID="df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875" Apr 21 10:42:37.584463 containerd[1463]: time="2026-04-21T10:42:37.584403941Z" level=error msg="ContainerStatus for \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\": not found" Apr 21 10:42:37.584717 kubelet[2511]: E0421 10:42:37.584570 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\": not found" containerID="df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875" Apr 21 10:42:37.584717 kubelet[2511]: I0421 10:42:37.584652 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875"} err="failed to get container status \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\": rpc error: code = NotFound desc = an error occurred when try to find container \"df72e37ff88349e1034a59ce87553a65091abb80a0c7764c97ef3784fd476875\": not found" Apr 21 10:42:37.584717 kubelet[2511]: I0421 10:42:37.584678 2511 scope.go:117] "RemoveContainer" containerID="cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a" Apr 21 10:42:37.585033 containerd[1463]: time="2026-04-21T10:42:37.584952773Z" level=error msg="ContainerStatus for \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\": not found" Apr 21 10:42:37.585248 kubelet[2511]: E0421 10:42:37.585181 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\": not found" containerID="cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a" Apr 21 10:42:37.585248 kubelet[2511]: I0421 10:42:37.585222 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a"} err="failed to get container status \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf4d73ae79c8535f6529c2ee8a494426b7826e7c1908fede8385d4e65a2d067a\": not found" Apr 21 10:42:37.585338 kubelet[2511]: I0421 10:42:37.585257 2511 scope.go:117] "RemoveContainer" containerID="6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f" Apr 21 10:42:37.585718 containerd[1463]: time="2026-04-21T10:42:37.585601212Z" level=error msg="ContainerStatus for \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\": not found" Apr 21 10:42:37.586735 kubelet[2511]: E0421 10:42:37.586579 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\": not found" containerID="6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f" Apr 21 10:42:37.586735 kubelet[2511]: I0421 10:42:37.586643 2511 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f"} err="failed to get container status \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b1e286fc68a1c25a92d3fc33d2064ae296f3c5e721334cdbf8f6b00d66d6f\": not found" Apr 21 10:42:37.586735 kubelet[2511]: I0421 10:42:37.586672 2511 scope.go:117] "RemoveContainer" containerID="70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967" Apr 21 10:42:37.587298 containerd[1463]: time="2026-04-21T10:42:37.587221012Z" level=error msg="ContainerStatus for \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\": not found" Apr 21 10:42:37.587589 kubelet[2511]: E0421 10:42:37.587398 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\": not found" containerID="70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967" Apr 21 10:42:37.587589 kubelet[2511]: I0421 10:42:37.587595 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967"} err="failed to get container status \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\": rpc error: code = NotFound desc = an error occurred when try to find container \"70f62c6197a696dc3c52dc0a6b341dd5166f78c2a8b6c559160aa515e27bb967\": not found" Apr 21 10:42:37.587589 kubelet[2511]: I0421 10:42:37.587667 2511 scope.go:117] "RemoveContainer" containerID="84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f" Apr 21 10:42:37.587589 kubelet[2511]: E0421 
10:42:37.588152 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\": not found" containerID="84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f" Apr 21 10:42:37.587589 kubelet[2511]: I0421 10:42:37.588173 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f"} err="failed to get container status \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\": rpc error: code = NotFound desc = an error occurred when try to find container \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\": not found" Apr 21 10:42:37.591677 containerd[1463]: time="2026-04-21T10:42:37.588007335Z" level=error msg="ContainerStatus for \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84ccf35c6e6b4a7f1fa11462a383f69b57c1776268358db51eb6b4ccdd4c984f\": not found" Apr 21 10:42:37.863590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0-rootfs.mount: Deactivated successfully. Apr 21 10:42:37.863769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454-rootfs.mount: Deactivated successfully. Apr 21 10:42:37.863833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454-shm.mount: Deactivated successfully. Apr 21 10:42:37.863903 systemd[1]: var-lib-kubelet-pods-4aec8b95\x2d943d\x2d49cb\x2d93f5\x2d673d3c8bc120-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxkxs.mount: Deactivated successfully. 
Apr 21 10:42:37.863983 systemd[1]: var-lib-kubelet-pods-110b2f98\x2dbbca\x2d4886\x2dab15\x2da251c87179b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx82sk.mount: Deactivated successfully. Apr 21 10:42:37.864048 systemd[1]: var-lib-kubelet-pods-110b2f98\x2dbbca\x2d4886\x2dab15\x2da251c87179b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 21 10:42:37.864140 systemd[1]: var-lib-kubelet-pods-110b2f98\x2dbbca\x2d4886\x2dab15\x2da251c87179b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.255809678Z" level=info msg="StopPodSandbox for \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\"" Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.255896298Z" level=info msg="TearDown network for sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" successfully" Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.255907638Z" level=info msg="StopPodSandbox for \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" returns successfully" Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.256181527Z" level=info msg="RemovePodSandbox for \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\"" Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.256200546Z" level=info msg="Forcibly stopping sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\"" Apr 21 10:42:38.257995 containerd[1463]: time="2026-04-21T10:42:38.256246519Z" level=info msg="TearDown network for sandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" successfully" Apr 21 10:42:38.265810 containerd[1463]: time="2026-04-21T10:42:38.265718046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\": 
an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:42:38.265810 containerd[1463]: time="2026-04-21T10:42:38.265796824Z" level=info msg="RemovePodSandbox \"824c360619399a11206909193aaaf1d346fe8153f0315516a003b045ff070454\" returns successfully" Apr 21 10:42:38.266493 containerd[1463]: time="2026-04-21T10:42:38.266436915Z" level=info msg="StopPodSandbox for \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\"" Apr 21 10:42:38.266567 containerd[1463]: time="2026-04-21T10:42:38.266548380Z" level=info msg="TearDown network for sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" successfully" Apr 21 10:42:38.266604 containerd[1463]: time="2026-04-21T10:42:38.266564214Z" level=info msg="StopPodSandbox for \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" returns successfully" Apr 21 10:42:38.266958 containerd[1463]: time="2026-04-21T10:42:38.266911380Z" level=info msg="RemovePodSandbox for \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\"" Apr 21 10:42:38.266958 containerd[1463]: time="2026-04-21T10:42:38.266952937Z" level=info msg="Forcibly stopping sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\"" Apr 21 10:42:38.267027 containerd[1463]: time="2026-04-21T10:42:38.267011611Z" level=info msg="TearDown network for sandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" successfully" Apr 21 10:42:38.273230 containerd[1463]: time="2026-04-21T10:42:38.272404529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:42:38.274288 containerd[1463]: time="2026-04-21T10:42:38.273671638Z" level=info msg="RemovePodSandbox \"7aacb407d39697c951d21f1a27c343ed56640a4c697d168a8f04a3fab29005a0\" returns successfully" Apr 21 10:42:38.276240 kubelet[2511]: I0421 10:42:38.276185 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110b2f98-bbca-4886-ab15-a251c87179b0" path="/var/lib/kubelet/pods/110b2f98-bbca-4886-ab15-a251c87179b0/volumes" Apr 21 10:42:38.278358 kubelet[2511]: I0421 10:42:38.276951 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aec8b95-943d-49cb-93f5-673d3c8bc120" path="/var/lib/kubelet/pods/4aec8b95-943d-49cb-93f5-673d3c8bc120/volumes" Apr 21 10:42:38.307462 kubelet[2511]: E0421 10:42:38.307387 2511 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:42:38.763453 sshd[4108]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:38.772767 systemd[1]: sshd@20-10.0.0.129:22-10.0.0.1:41362.service: Deactivated successfully. Apr 21 10:42:38.775552 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:42:38.775784 systemd[1]: session-21.scope: Consumed 1.200s CPU time. Apr 21 10:42:38.777192 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:42:38.784415 systemd[1]: Started sshd@21-10.0.0.129:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Apr 21 10:42:38.785469 systemd-logind[1446]: Removed session 21. Apr 21 10:42:38.818541 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:38.820462 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:38.827919 systemd-logind[1446]: New session 22 of user core. 
Apr 21 10:42:38.840111 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 10:42:39.668300 sshd[4271]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:39.678983 systemd[1]: sshd@21-10.0.0.129:22-10.0.0.1:37254.service: Deactivated successfully. Apr 21 10:42:39.681113 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 10:42:39.686794 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Apr 21 10:42:39.703460 systemd[1]: Started sshd@22-10.0.0.129:22-10.0.0.1:37262.service - OpenSSH per-connection server daemon (10.0.0.1:37262). Apr 21 10:42:39.706231 systemd-logind[1446]: Removed session 22. Apr 21 10:42:39.726522 systemd[1]: Created slice kubepods-burstable-podbaffea1c_ff9e_4664_be07_f21cb5f45553.slice - libcontainer container kubepods-burstable-podbaffea1c_ff9e_4664_be07_f21cb5f45553.slice. Apr 21 10:42:39.768011 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 37262 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:39.768780 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:39.775985 systemd-logind[1446]: New session 23 of user core. Apr 21 10:42:39.786072 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 21 10:42:39.800369 kubelet[2511]: I0421 10:42:39.800196 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baffea1c-ff9e-4664-be07-f21cb5f45553-cilium-config-path\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800369 kubelet[2511]: I0421 10:42:39.800259 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/baffea1c-ff9e-4664-be07-f21cb5f45553-cilium-ipsec-secrets\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800448 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-host-proc-sys-kernel\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800550 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-cilium-cgroup\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800576 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-cni-path\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800600 2511 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfb6\" (UniqueName: \"kubernetes.io/projected/baffea1c-ff9e-4664-be07-f21cb5f45553-kube-api-access-5wfb6\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800665 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/baffea1c-ff9e-4664-be07-f21cb5f45553-hubble-tls\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800810 kubelet[2511]: I0421 10:42:39.800685 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-hostproc\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800704 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-bpf-maps\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800715 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-xtables-lock\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800731 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-etc-cni-netd\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800744 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/baffea1c-ff9e-4664-be07-f21cb5f45553-clustermesh-secrets\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800755 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-host-proc-sys-net\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.800915 kubelet[2511]: I0421 10:42:39.800767 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-cilium-run\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.801005 kubelet[2511]: I0421 10:42:39.800777 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baffea1c-ff9e-4664-be07-f21cb5f45553-lib-modules\") pod \"cilium-t89cx\" (UID: \"baffea1c-ff9e-4664-be07-f21cb5f45553\") " pod="kube-system/cilium-t89cx" Apr 21 10:42:39.839243 sshd[4284]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:39.854392 systemd[1]: sshd@22-10.0.0.129:22-10.0.0.1:37262.service: Deactivated successfully. Apr 21 10:42:39.856681 systemd[1]: session-23.scope: Deactivated successfully. 
Apr 21 10:42:39.858268 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Apr 21 10:42:39.860263 kubelet[2511]: I0421 10:42:39.860183 2511 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T10:42:39Z","lastTransitionTime":"2026-04-21T10:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 21 10:42:39.863249 systemd[1]: Started sshd@23-10.0.0.129:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Apr 21 10:42:39.866204 systemd-logind[1446]: Removed session 23. Apr 21 10:42:39.896838 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:42:39.899186 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:42:39.906451 systemd-logind[1446]: New session 24 of user core. Apr 21 10:42:39.929043 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 21 10:42:40.037748 kubelet[2511]: E0421 10:42:40.037695 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:40.038423 containerd[1463]: time="2026-04-21T10:42:40.038339907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t89cx,Uid:baffea1c-ff9e-4664-be07-f21cb5f45553,Namespace:kube-system,Attempt:0,}" Apr 21 10:42:40.102049 containerd[1463]: time="2026-04-21T10:42:40.101845956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:42:40.102188 containerd[1463]: time="2026-04-21T10:42:40.102059709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:42:40.102888 containerd[1463]: time="2026-04-21T10:42:40.102802084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:40.102957 containerd[1463]: time="2026-04-21T10:42:40.102922175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:42:40.136838 systemd[1]: Started cri-containerd-d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f.scope - libcontainer container d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f. Apr 21 10:42:40.169046 containerd[1463]: time="2026-04-21T10:42:40.168966421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t89cx,Uid:baffea1c-ff9e-4664-be07-f21cb5f45553,Namespace:kube-system,Attempt:0,} returns sandbox id \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\"" Apr 21 10:42:40.171194 kubelet[2511]: E0421 10:42:40.171135 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:40.185063 containerd[1463]: time="2026-04-21T10:42:40.184851531Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:42:40.199222 containerd[1463]: time="2026-04-21T10:42:40.199056792Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31\"" Apr 21 10:42:40.199985 containerd[1463]: time="2026-04-21T10:42:40.199932718Z" level=info msg="StartContainer for 
\"20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31\"" Apr 21 10:42:40.232050 systemd[1]: Started cri-containerd-20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31.scope - libcontainer container 20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31. Apr 21 10:42:40.262521 containerd[1463]: time="2026-04-21T10:42:40.262471378Z" level=info msg="StartContainer for \"20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31\" returns successfully" Apr 21 10:42:40.273399 systemd[1]: cri-containerd-20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31.scope: Deactivated successfully. Apr 21 10:42:40.309308 containerd[1463]: time="2026-04-21T10:42:40.309242203Z" level=info msg="shim disconnected" id=20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31 namespace=k8s.io Apr 21 10:42:40.309308 containerd[1463]: time="2026-04-21T10:42:40.309300990Z" level=warning msg="cleaning up after shim disconnected" id=20e496918bbdf542fb79d58db5cb89ba12c84d73e2e39524c702e0feb5ab9a31 namespace=k8s.io Apr 21 10:42:40.309308 containerd[1463]: time="2026-04-21T10:42:40.309308128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:40.457431 kubelet[2511]: E0421 10:42:40.457238 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:40.464257 containerd[1463]: time="2026-04-21T10:42:40.464180064Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:42:40.483204 containerd[1463]: time="2026-04-21T10:42:40.483096606Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6\"" Apr 21 10:42:40.483789 containerd[1463]: time="2026-04-21T10:42:40.483761827Z" level=info msg="StartContainer for \"ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6\"" Apr 21 10:42:40.517199 systemd[1]: Started cri-containerd-ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6.scope - libcontainer container ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6. Apr 21 10:42:40.548902 containerd[1463]: time="2026-04-21T10:42:40.548829996Z" level=info msg="StartContainer for \"ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6\" returns successfully" Apr 21 10:42:40.557486 systemd[1]: cri-containerd-ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6.scope: Deactivated successfully. Apr 21 10:42:40.590644 containerd[1463]: time="2026-04-21T10:42:40.590548117Z" level=info msg="shim disconnected" id=ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6 namespace=k8s.io Apr 21 10:42:40.590644 containerd[1463]: time="2026-04-21T10:42:40.590599114Z" level=warning msg="cleaning up after shim disconnected" id=ef2b6b95b9d67bf03b5fb7b1cc19f43d7f6509eb546a25eff85ed6fbd3411dd6 namespace=k8s.io Apr 21 10:42:40.590644 containerd[1463]: time="2026-04-21T10:42:40.590633199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:41.460426 kubelet[2511]: E0421 10:42:41.460391 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:41.464129 containerd[1463]: time="2026-04-21T10:42:41.464084887Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:42:41.482512 containerd[1463]: time="2026-04-21T10:42:41.482418496Z" level=info msg="CreateContainer within 
sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca\"" Apr 21 10:42:41.484217 containerd[1463]: time="2026-04-21T10:42:41.482999286Z" level=info msg="StartContainer for \"89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca\"" Apr 21 10:42:41.507786 systemd[1]: Started cri-containerd-89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca.scope - libcontainer container 89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca. Apr 21 10:42:41.528040 containerd[1463]: time="2026-04-21T10:42:41.527986901Z" level=info msg="StartContainer for \"89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca\" returns successfully" Apr 21 10:42:41.529192 systemd[1]: cri-containerd-89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca.scope: Deactivated successfully. Apr 21 10:42:41.550669 containerd[1463]: time="2026-04-21T10:42:41.550576265Z" level=info msg="shim disconnected" id=89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca namespace=k8s.io Apr 21 10:42:41.550669 containerd[1463]: time="2026-04-21T10:42:41.550662554Z" level=warning msg="cleaning up after shim disconnected" id=89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca namespace=k8s.io Apr 21 10:42:41.550946 containerd[1463]: time="2026-04-21T10:42:41.550672822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:41.908547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d4d1603fcd51895c8579e4bc62847b0ea8bcdb4eb2cd95447432df9a40e8ca-rootfs.mount: Deactivated successfully. 
Apr 21 10:42:42.464269 kubelet[2511]: E0421 10:42:42.464199 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:42.470599 containerd[1463]: time="2026-04-21T10:42:42.470549604Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:42:42.491503 containerd[1463]: time="2026-04-21T10:42:42.491417392Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7\"" Apr 21 10:42:42.492347 containerd[1463]: time="2026-04-21T10:42:42.492206323Z" level=info msg="StartContainer for \"e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7\"" Apr 21 10:42:42.524798 systemd[1]: Started cri-containerd-e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7.scope - libcontainer container e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7. Apr 21 10:42:42.548920 systemd[1]: cri-containerd-e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7.scope: Deactivated successfully. 
Apr 21 10:42:42.551076 containerd[1463]: time="2026-04-21T10:42:42.551030894Z" level=info msg="StartContainer for \"e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7\" returns successfully" Apr 21 10:42:42.570775 containerd[1463]: time="2026-04-21T10:42:42.570682353Z" level=info msg="shim disconnected" id=e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7 namespace=k8s.io Apr 21 10:42:42.570775 containerd[1463]: time="2026-04-21T10:42:42.570733233Z" level=warning msg="cleaning up after shim disconnected" id=e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7 namespace=k8s.io Apr 21 10:42:42.570775 containerd[1463]: time="2026-04-21T10:42:42.570741898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:42:42.908680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e831b389786efbe79791a686018df24682b649d4e804034a925e8489aa3316c7-rootfs.mount: Deactivated successfully. Apr 21 10:42:43.309404 kubelet[2511]: E0421 10:42:43.309320 2511 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:42:43.468704 kubelet[2511]: E0421 10:42:43.468648 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:43.474066 containerd[1463]: time="2026-04-21T10:42:43.474010288Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:42:43.493017 containerd[1463]: time="2026-04-21T10:42:43.492884946Z" level=info msg="CreateContainer within sandbox \"d74814764d855ca288e79c76b6fa3c4730675ca7aaafe3064d09b008b55bfe1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5\"" Apr 21 10:42:43.493654 containerd[1463]: time="2026-04-21T10:42:43.493565340Z" level=info msg="StartContainer for \"64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5\"" Apr 21 10:42:43.520793 systemd[1]: Started cri-containerd-64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5.scope - libcontainer container 64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5. Apr 21 10:42:43.544283 containerd[1463]: time="2026-04-21T10:42:43.544229217Z" level=info msg="StartContainer for \"64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5\" returns successfully" Apr 21 10:42:43.774650 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 21 10:42:44.473792 kubelet[2511]: E0421 10:42:44.473755 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:44.488388 kubelet[2511]: I0421 10:42:44.487561 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t89cx" podStartSLOduration=5.487541911 podStartE2EDuration="5.487541911s" podCreationTimestamp="2026-04-21 10:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:42:44.487274982 +0000 UTC m=+66.286807230" watchObservedRunningTime="2026-04-21 10:42:44.487541911 +0000 UTC m=+66.287074171" Apr 21 10:42:46.039758 kubelet[2511]: E0421 10:42:46.039484 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:46.394945 systemd-networkd[1386]: lxc_health: Link UP Apr 21 10:42:46.396589 systemd-networkd[1386]: lxc_health: Gained carrier Apr 21 10:42:47.273237 kubelet[2511]: E0421 
10:42:47.273191 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:48.028996 systemd-networkd[1386]: lxc_health: Gained IPv6LL Apr 21 10:42:48.040133 kubelet[2511]: E0421 10:42:48.040077 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:48.276255 systemd[1]: run-containerd-runc-k8s.io-64e8662eb6668f6d1ba8c73b8508c7bac747a5bd7948ee8761b96b042ee905d5-runc.SNQfO7.mount: Deactivated successfully. Apr 21 10:42:48.480967 kubelet[2511]: E0421 10:42:48.480839 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:49.483084 kubelet[2511]: E0421 10:42:49.483024 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:42:54.603363 sshd[4292]: pam_unix(sshd:session): session closed for user core Apr 21 10:42:54.606118 systemd[1]: sshd@23-10.0.0.129:22-10.0.0.1:37268.service: Deactivated successfully. Apr 21 10:42:54.607558 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 10:42:54.608231 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Apr 21 10:42:54.609064 systemd-logind[1446]: Removed session 24.