Apr 25 00:24:40.840295 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 25 00:24:40.840312 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 25 00:24:40.840416 kernel: BIOS-provided physical RAM map: Apr 25 00:24:40.840421 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 25 00:24:40.840425 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 25 00:24:40.840429 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 25 00:24:40.840434 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 25 00:24:40.840439 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 25 00:24:40.840443 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 25 00:24:40.840447 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 25 00:24:40.840453 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 25 00:24:40.840457 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 25 00:24:40.840461 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 25 00:24:40.840466 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 25 00:24:40.840471 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 25 00:24:40.840476 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 25 00:24:40.840482 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
25 00:24:40.840486 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 25 00:24:40.840491 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 25 00:24:40.840495 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 25 00:24:40.840500 kernel: NX (Execute Disable) protection: active Apr 25 00:24:40.840505 kernel: APIC: Static calls initialized Apr 25 00:24:40.840509 kernel: efi: EFI v2.7 by EDK II Apr 25 00:24:40.840514 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Apr 25 00:24:40.840518 kernel: SMBIOS 2.8 present. Apr 25 00:24:40.840523 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 25 00:24:40.840527 kernel: Hypervisor detected: KVM Apr 25 00:24:40.840533 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 25 00:24:40.840537 kernel: kvm-clock: using sched offset of 4599772159 cycles Apr 25 00:24:40.840542 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 25 00:24:40.840547 kernel: tsc: Detected 2793.438 MHz processor Apr 25 00:24:40.840552 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 25 00:24:40.840557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 25 00:24:40.840562 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 25 00:24:40.840566 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 25 00:24:40.840571 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 25 00:24:40.840577 kernel: Using GB pages for direct mapping Apr 25 00:24:40.840582 kernel: Secure boot disabled Apr 25 00:24:40.840587 kernel: ACPI: Early table checksum verification disabled Apr 25 00:24:40.840591 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 25 00:24:40.840599 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 25 00:24:40.840604 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840609 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840615 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 25 00:24:40.840620 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840625 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840629 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840634 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 25 00:24:40.840639 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 25 00:24:40.840644 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 25 00:24:40.840650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 25 00:24:40.840655 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 25 00:24:40.840660 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 25 00:24:40.840665 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 25 00:24:40.840669 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 25 00:24:40.840674 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 25 00:24:40.840679 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 25 00:24:40.840684 kernel: No NUMA configuration found Apr 25 00:24:40.840689 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 25 00:24:40.840695 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 25 00:24:40.840700 kernel: Zone ranges: Apr 25 00:24:40.840705 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 25 00:24:40.840710 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 25 00:24:40.840715 kernel: Normal empty Apr 25 00:24:40.840720 kernel: Movable zone start for each node Apr 25 00:24:40.840725 kernel: Early memory node ranges Apr 25 00:24:40.840729 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 25 00:24:40.840734 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 25 00:24:40.840739 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 25 00:24:40.840745 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 25 00:24:40.840750 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 25 00:24:40.840755 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 25 00:24:40.840760 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 25 00:24:40.840765 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 25 00:24:40.840769 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 25 00:24:40.840774 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 25 00:24:40.840779 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 25 00:24:40.840784 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 25 00:24:40.840789 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 25 00:24:40.840795 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 25 00:24:40.840800 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 25 00:24:40.840805 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 25 00:24:40.840810 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 25 00:24:40.840815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 25 00:24:40.840820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 25 00:24:40.840825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 25 00:24:40.840830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 25 
00:24:40.840835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 25 00:24:40.840841 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 25 00:24:40.840846 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 25 00:24:40.840850 kernel: TSC deadline timer available Apr 25 00:24:40.840855 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 25 00:24:40.840860 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 25 00:24:40.840865 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 25 00:24:40.840870 kernel: kvm-guest: setup PV sched yield Apr 25 00:24:40.840875 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 25 00:24:40.840880 kernel: Booting paravirtualized kernel on KVM Apr 25 00:24:40.840886 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 25 00:24:40.840891 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 25 00:24:40.840896 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 25 00:24:40.840901 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 25 00:24:40.840906 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 25 00:24:40.840911 kernel: kvm-guest: PV spinlocks enabled Apr 25 00:24:40.840916 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 25 00:24:40.840922 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 25 00:24:40.840929 kernel: random: crng init done Apr 25 00:24:40.840933 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 25 00:24:40.840938 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 25 00:24:40.840943 kernel: Fallback order for Node 0: 0 Apr 25 00:24:40.840948 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 25 00:24:40.840953 kernel: Policy zone: DMA32 Apr 25 00:24:40.840958 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 25 00:24:40.840963 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved) Apr 25 00:24:40.840968 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 25 00:24:40.840974 kernel: ftrace: allocating 37996 entries in 149 pages Apr 25 00:24:40.840979 kernel: ftrace: allocated 149 pages with 4 groups Apr 25 00:24:40.840984 kernel: Dynamic Preempt: voluntary Apr 25 00:24:40.840989 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 25 00:24:40.841000 kernel: rcu: RCU event tracing is enabled. Apr 25 00:24:40.841007 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 25 00:24:40.841012 kernel: Trampoline variant of Tasks RCU enabled. Apr 25 00:24:40.841018 kernel: Rude variant of Tasks RCU enabled. Apr 25 00:24:40.841023 kernel: Tracing variant of Tasks RCU enabled. Apr 25 00:24:40.841028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 25 00:24:40.841043 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 25 00:24:40.841049 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 25 00:24:40.841056 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 25 00:24:40.841061 kernel: Console: colour dummy device 80x25 Apr 25 00:24:40.841067 kernel: printk: console [ttyS0] enabled Apr 25 00:24:40.841072 kernel: ACPI: Core revision 20230628 Apr 25 00:24:40.841078 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 25 00:24:40.841085 kernel: APIC: Switch to symmetric I/O mode setup Apr 25 00:24:40.841091 kernel: x2apic enabled Apr 25 00:24:40.841096 kernel: APIC: Switched APIC routing to: physical x2apic Apr 25 00:24:40.841102 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 25 00:24:40.841107 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 25 00:24:40.841113 kernel: kvm-guest: setup PV IPIs Apr 25 00:24:40.841118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 25 00:24:40.841124 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 25 00:24:40.841129 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 25 00:24:40.841136 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 25 00:24:40.841142 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 25 00:24:40.841147 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 25 00:24:40.841153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 25 00:24:40.841158 kernel: Spectre V2 : Mitigation: Retpolines Apr 25 00:24:40.841164 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 25 00:24:40.841169 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 25 00:24:40.841175 kernel: RETBleed: Vulnerable Apr 25 00:24:40.841181 kernel: Speculative Store Bypass: Vulnerable Apr 25 00:24:40.841187 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 25 00:24:40.841193 kernel: GDS: Unknown: Dependent on hypervisor status Apr 25 00:24:40.841198 kernel: active return thunk: its_return_thunk Apr 25 00:24:40.841204 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 25 00:24:40.841209 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 25 00:24:40.841215 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 25 00:24:40.841220 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 25 00:24:40.841225 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 25 00:24:40.841231 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 25 00:24:40.841238 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 25 00:24:40.841243 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 25 00:24:40.841249 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 25 00:24:40.841254 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 25 00:24:40.841259 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 25 00:24:40.841265 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 25 00:24:40.841270 kernel: Freeing SMP alternatives memory: 32K Apr 25 00:24:40.841276 kernel: pid_max: default: 32768 minimum: 301 Apr 25 00:24:40.841283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 25 00:24:40.841288 kernel: landlock: Up and running. Apr 25 00:24:40.841294 kernel: SELinux: Initializing. 
Apr 25 00:24:40.841299 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 25 00:24:40.841304 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 25 00:24:40.841310 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 25 00:24:40.841333 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 25 00:24:40.841339 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 25 00:24:40.841344 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 25 00:24:40.841352 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 25 00:24:40.841357 kernel: signal: max sigframe size: 3632 Apr 25 00:24:40.841376 kernel: rcu: Hierarchical SRCU implementation. Apr 25 00:24:40.841386 kernel: rcu: Max phase no-delay instances is 400. Apr 25 00:24:40.841403 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 25 00:24:40.841410 kernel: smp: Bringing up secondary CPUs ... Apr 25 00:24:40.841415 kernel: smpboot: x86: Booting SMP configuration: Apr 25 00:24:40.841421 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 25 00:24:40.841426 kernel: smp: Brought up 1 node, 4 CPUs Apr 25 00:24:40.841433 kernel: smpboot: Max logical packages: 1 Apr 25 00:24:40.841438 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 25 00:24:40.841444 kernel: devtmpfs: initialized Apr 25 00:24:40.841449 kernel: x86/mm: Memory block size: 128MB Apr 25 00:24:40.841455 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 25 00:24:40.841460 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 25 00:24:40.841465 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 25 00:24:40.841471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 25 00:24:40.841477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 25 00:24:40.841483 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 25 00:24:40.841489 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 25 00:24:40.841494 kernel: pinctrl core: initialized pinctrl subsystem Apr 25 00:24:40.841500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 25 00:24:40.841505 kernel: audit: initializing netlink subsys (disabled) Apr 25 00:24:40.841510 kernel: audit: type=2000 audit(1777076680.497:1): state=initialized audit_enabled=0 res=1 Apr 25 00:24:40.841516 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 25 00:24:40.841521 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 25 00:24:40.841526 kernel: cpuidle: using governor menu Apr 25 00:24:40.841533 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 25 00:24:40.841538 kernel: dca service started, version 1.12.1 Apr 25 00:24:40.841544 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 25 
00:24:40.841549 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 25 00:24:40.841554 kernel: PCI: Using configuration type 1 for base access Apr 25 00:24:40.841560 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 25 00:24:40.841565 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 25 00:24:40.841571 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 25 00:24:40.841576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 25 00:24:40.841583 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 25 00:24:40.841588 kernel: ACPI: Added _OSI(Module Device) Apr 25 00:24:40.841594 kernel: ACPI: Added _OSI(Processor Device) Apr 25 00:24:40.841599 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 25 00:24:40.841604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 25 00:24:40.841610 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 25 00:24:40.841615 kernel: ACPI: Interpreter enabled Apr 25 00:24:40.841621 kernel: ACPI: PM: (supports S0 S3 S5) Apr 25 00:24:40.841626 kernel: ACPI: Using IOAPIC for interrupt routing Apr 25 00:24:40.841633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 25 00:24:40.841639 kernel: PCI: Using E820 reservations for host bridge windows Apr 25 00:24:40.841644 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 25 00:24:40.841650 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 25 00:24:40.841754 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 25 00:24:40.841816 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 25 00:24:40.841871 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 25 00:24:40.841880 kernel: PCI host bridge to bus 0000:00 Apr 25 00:24:40.841938 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 25 00:24:40.841988 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 25 00:24:40.842038 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 25 00:24:40.842086 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 25 00:24:40.842134 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 25 00:24:40.842182 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 25 00:24:40.842233 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 25 00:24:40.842301 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 25 00:24:40.842429 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 25 00:24:40.842488 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 25 00:24:40.842543 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 25 00:24:40.842596 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 25 00:24:40.842650 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 25 00:24:40.842708 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 25 00:24:40.842769 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 25 00:24:40.842824 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 25 00:24:40.842879 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 25 00:24:40.842935 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 25 00:24:40.842994 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 25 00:24:40.843051 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 25 00:24:40.843106 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 25 00:24:40.843162 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 25 00:24:40.843222 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 25 00:24:40.843277 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 25 00:24:40.843361 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 25 00:24:40.843449 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 25 00:24:40.843509 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 25 00:24:40.843585 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 25 00:24:40.843645 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 25 00:24:40.843759 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 25 00:24:40.843817 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 25 00:24:40.843872 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 25 00:24:40.843931 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 25 00:24:40.844002 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 25 00:24:40.844010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 25 00:24:40.844016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 25 00:24:40.844021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 25 00:24:40.844027 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 25 00:24:40.844032 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 25 00:24:40.844037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 25 00:24:40.844042 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 25 00:24:40.844050 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 25 00:24:40.844055 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 25 00:24:40.844061 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 25 00:24:40.844066 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 25 00:24:40.844071 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 25 00:24:40.844077 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 25 00:24:40.844082 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 25 00:24:40.844088 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 25 00:24:40.844093 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 25 00:24:40.844100 kernel: iommu: Default domain type: Translated Apr 25 00:24:40.844105 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 25 00:24:40.844111 kernel: efivars: Registered efivars operations Apr 25 00:24:40.844116 kernel: PCI: Using ACPI for IRQ routing Apr 25 00:24:40.844122 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 25 00:24:40.844127 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 25 00:24:40.844132 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 25 00:24:40.844137 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 25 00:24:40.844143 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 25 00:24:40.844199 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 25 00:24:40.844253 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 25 00:24:40.844308 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 25 00:24:40.844346 kernel: vgaarb: loaded Apr 25 00:24:40.844352 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 25 00:24:40.844358 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 25 00:24:40.844377 kernel: clocksource: Switched to clocksource kvm-clock Apr 25 00:24:40.844387 kernel: VFS: Disk quotas dquot_6.6.0 Apr 25 00:24:40.844395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 25 00:24:40.844403 kernel: pnp: PnP ACPI init Apr 25 00:24:40.844469 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 25 00:24:40.844477 kernel: pnp: PnP ACPI: found 6 devices Apr 25 00:24:40.844483 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 25 00:24:40.844488 kernel: NET: Registered PF_INET protocol family Apr 25 00:24:40.844494 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 25 00:24:40.844499 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 25 00:24:40.844505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 25 00:24:40.844512 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 25 00:24:40.844518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 25 00:24:40.844523 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 25 00:24:40.844529 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 25 00:24:40.844535 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 25 00:24:40.844540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 25 00:24:40.844546 kernel: NET: Registered PF_XDP protocol family Apr 25 00:24:40.844602 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 25 00:24:40.844659 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 25 00:24:40.844714 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 25 00:24:40.844763 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 25 00:24:40.844845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 25 00:24:40.844924 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 25 00:24:40.844975 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 25 00:24:40.845043 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 25 00:24:40.845051 kernel: PCI: CLS 0 bytes, default 64 Apr 25 00:24:40.845057 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 25 00:24:40.845064 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 25 00:24:40.845070 kernel: Initialise system trusted keyrings Apr 25 00:24:40.845076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 25 00:24:40.845081 kernel: Key type asymmetric registered Apr 25 00:24:40.845087 kernel: Asymmetric key parser 'x509' registered Apr 25 00:24:40.845093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 25 00:24:40.845098 kernel: io scheduler mq-deadline registered Apr 25 00:24:40.845104 kernel: io scheduler kyber registered Apr 25 00:24:40.845109 kernel: io scheduler bfq registered Apr 25 00:24:40.845116 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 25 00:24:40.845122 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 25 00:24:40.845128 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 25 00:24:40.845133 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 25 00:24:40.845139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 25 00:24:40.845145 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 25 00:24:40.845150 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 25 00:24:40.845156 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 25 00:24:40.845161 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 25 00:24:40.845225 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 25 00:24:40.845233 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 25 00:24:40.845283 kernel: rtc_cmos 00:04: registered as rtc0 Apr 25 00:24:40.845358 kernel: rtc_cmos 00:04: setting system clock to 2026-04-25T00:24:40 UTC (1777076680) Apr 25 00:24:40.845439 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 25 00:24:40.845447 kernel: intel_pstate: CPU model not supported Apr 25 
00:24:40.845453 kernel: efifb: probing for efifb Apr 25 00:24:40.845458 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 25 00:24:40.845466 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 25 00:24:40.845472 kernel: efifb: scrolling: redraw Apr 25 00:24:40.845477 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 25 00:24:40.845483 kernel: Console: switching to colour frame buffer device 100x37 Apr 25 00:24:40.845489 kernel: fb0: EFI VGA frame buffer device Apr 25 00:24:40.845506 kernel: pstore: Using crash dump compression: deflate Apr 25 00:24:40.845513 kernel: pstore: Registered efi_pstore as persistent store backend Apr 25 00:24:40.845519 kernel: NET: Registered PF_INET6 protocol family Apr 25 00:24:40.845524 kernel: Segment Routing with IPv6 Apr 25 00:24:40.845531 kernel: In-situ OAM (IOAM) with IPv6 Apr 25 00:24:40.845537 kernel: NET: Registered PF_PACKET protocol family Apr 25 00:24:40.845542 kernel: Key type dns_resolver registered Apr 25 00:24:40.845548 kernel: IPI shorthand broadcast: enabled Apr 25 00:24:40.845554 kernel: sched_clock: Marking stable (685008330, 198294683)->(929403359, -46100346) Apr 25 00:24:40.845559 kernel: registered taskstats version 1 Apr 25 00:24:40.845565 kernel: Loading compiled-in X.509 certificates Apr 25 00:24:40.845570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 25 00:24:40.845576 kernel: Key type .fscrypt registered Apr 25 00:24:40.845583 kernel: Key type fscrypt-provisioning registered Apr 25 00:24:40.845588 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 25 00:24:40.845594 kernel: ima: Allocated hash algorithm: sha1
Apr 25 00:24:40.845599 kernel: ima: No architecture policies found
Apr 25 00:24:40.845604 kernel: clk: Disabling unused clocks
Apr 25 00:24:40.845610 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 25 00:24:40.845616 kernel: Write protecting the kernel read-only data: 36864k
Apr 25 00:24:40.845621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 25 00:24:40.845627 kernel: Run /init as init process
Apr 25 00:24:40.845634 kernel: with arguments:
Apr 25 00:24:40.845639 kernel: /init
Apr 25 00:24:40.845645 kernel: with environment:
Apr 25 00:24:40.845651 kernel: HOME=/
Apr 25 00:24:40.845656 kernel: TERM=linux
Apr 25 00:24:40.845664 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 00:24:40.845671 systemd[1]: Detected virtualization kvm.
Apr 25 00:24:40.845680 systemd[1]: Detected architecture x86-64.
Apr 25 00:24:40.845685 systemd[1]: Running in initrd.
Apr 25 00:24:40.845691 systemd[1]: No hostname configured, using default hostname.
Apr 25 00:24:40.845697 systemd[1]: Hostname set to .
Apr 25 00:24:40.845703 systemd[1]: Initializing machine ID from VM UUID.
Apr 25 00:24:40.845710 systemd[1]: Queued start job for default target initrd.target.
Apr 25 00:24:40.845716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 00:24:40.845722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 00:24:40.845729 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 25 00:24:40.845735 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 00:24:40.845741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 25 00:24:40.845749 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 25 00:24:40.845757 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 25 00:24:40.845764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 25 00:24:40.845770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 00:24:40.845776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:24:40.845782 systemd[1]: Reached target paths.target - Path Units.
Apr 25 00:24:40.845788 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 00:24:40.845794 systemd[1]: Reached target swap.target - Swaps.
Apr 25 00:24:40.845800 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 00:24:40.845807 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 00:24:40.845813 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 00:24:40.845819 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 25 00:24:40.845825 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 25 00:24:40.845831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 00:24:40.845837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 00:24:40.845843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 00:24:40.845850 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 00:24:40.845856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 25 00:24:40.845863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 00:24:40.845869 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 25 00:24:40.845875 systemd[1]: Starting systemd-fsck-usr.service...
Apr 25 00:24:40.845881 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 00:24:40.845887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 00:24:40.845893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:40.845899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 25 00:24:40.845916 systemd-journald[192]: Collecting audit messages is disabled.
Apr 25 00:24:40.845932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 00:24:40.845938 systemd[1]: Finished systemd-fsck-usr.service.
Apr 25 00:24:40.845946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 00:24:40.845953 systemd-journald[192]: Journal started
Apr 25 00:24:40.845968 systemd-journald[192]: Runtime Journal (/run/log/journal/9cb230b6ffef4e75a1beec72abee4600) is 6.0M, max 48.3M, 42.2M free.
Apr 25 00:24:40.848196 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 00:24:40.851621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 00:24:40.855478 systemd-modules-load[193]: Inserted module 'overlay'
Apr 25 00:24:40.855547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:40.859938 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 00:24:40.861708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:24:40.879471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 00:24:40.884308 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 25 00:24:40.884455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 25 00:24:40.885807 kernel: Bridge firewalling registered
Apr 25 00:24:40.885064 systemd-modules-load[193]: Inserted module 'br_netfilter'
Apr 25 00:24:40.885961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 00:24:40.887052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 00:24:40.898525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 00:24:40.899735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:24:40.902950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 00:24:40.904510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 00:24:40.907988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 25 00:24:40.919561 dracut-cmdline[231]: dracut-dracut-053
Apr 25 00:24:40.922380 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 25 00:24:40.927299 systemd-resolved[228]: Positive Trust Anchors:
Apr 25 00:24:40.927306 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 00:24:40.927360 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 00:24:40.929178 systemd-resolved[228]: Defaulting to hostname 'linux'.
Apr 25 00:24:40.939442 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 00:24:40.940034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:24:40.993374 kernel: SCSI subsystem initialized
Apr 25 00:24:41.000353 kernel: Loading iSCSI transport class v2.0-870.
Apr 25 00:24:41.010378 kernel: iscsi: registered transport (tcp)
Apr 25 00:24:41.027707 kernel: iscsi: registered transport (qla4xxx)
Apr 25 00:24:41.027742 kernel: QLogic iSCSI HBA Driver
Apr 25 00:24:41.057658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 25 00:24:41.069470 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 25 00:24:41.089382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
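The positive trust anchor that systemd-resolved logs above is the standard DNSSEC delegation signer (DS) record for the root zone. As an illustrative aside (not part of the boot log), its presentation format splits into owner, class, type, key tag, algorithm, digest type, and digest, per RFC 4034:

```python
# Parse the root-zone DS record exactly as it appears in the log entry above.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()

# Key tag 20326 is the root KSK; algorithm 8 is RSA/SHA-256;
# digest type 2 is SHA-256, hence a 64-hex-character digest.
print(owner, rtype, int(key_tag), int(algorithm), int(digest_type), len(digest))
```

Running this prints `. DS 20326 8 2 64`, matching the fields resolved trusts for validating root-zone responses.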
Apr 25 00:24:41.089417 kernel: device-mapper: uevent: version 1.0.3
Apr 25 00:24:41.090777 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 25 00:24:41.126381 kernel: raid6: avx512x4 gen() 46278 MB/s
Apr 25 00:24:41.143363 kernel: raid6: avx512x2 gen() 45812 MB/s
Apr 25 00:24:41.160388 kernel: raid6: avx512x1 gen() 45254 MB/s
Apr 25 00:24:41.177355 kernel: raid6: avx2x4 gen() 37967 MB/s
Apr 25 00:24:41.194361 kernel: raid6: avx2x2 gen() 37598 MB/s
Apr 25 00:24:41.211919 kernel: raid6: avx2x1 gen() 29311 MB/s
Apr 25 00:24:41.211938 kernel: raid6: using algorithm avx512x4 gen() 46278 MB/s
Apr 25 00:24:41.229899 kernel: raid6: .... xor() 10501 MB/s, rmw enabled
Apr 25 00:24:41.229942 kernel: raid6: using avx512x2 recovery algorithm
Apr 25 00:24:41.247355 kernel: xor: automatically using best checksumming function avx
Apr 25 00:24:41.363430 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 25 00:24:41.372068 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 00:24:41.389472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 00:24:41.398560 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 25 00:24:41.401088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 00:24:41.415467 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 25 00:24:41.424809 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Apr 25 00:24:41.445220 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 00:24:41.459457 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 25 00:24:41.487153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 00:24:41.498522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
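The `raid6: ... gen()` entries above show the kernel benchmarking each available syndrome-generation routine and then keeping the fastest one. The selection step is simply an argmax over the measured throughputs; a minimal sketch using the numbers from this boot (illustrative Python, not the kernel's C code):

```python
# Throughputs (MB/s) as benchmarked during this boot, copied from the log above.
gen_results = {
    "avx512x4": 46278,
    "avx512x2": 45812,
    "avx512x1": 45254,
    "avx2x4": 37967,
    "avx2x2": 37598,
    "avx2x1": 29311,
}

# The kernel keeps the routine with the highest gen() throughput.
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```

This reproduces the "using algorithm avx512x4 gen() 46278 MB/s" line; the recovery algorithm is chosen by a separate benchmark, which is why the log picks avx512x2 there.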
Apr 25 00:24:41.508256 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 25 00:24:41.512081 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 00:24:41.515927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 00:24:41.521995 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 25 00:24:41.522126 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 25 00:24:41.522191 kernel: cryptd: max_cpu_qlen set to 1000
Apr 25 00:24:41.522240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 25 00:24:41.528936 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 25 00:24:41.528963 kernel: GPT:9289727 != 19775487
Apr 25 00:24:41.528973 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 25 00:24:41.528982 kernel: GPT:9289727 != 19775487
Apr 25 00:24:41.530339 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 25 00:24:41.530361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 25 00:24:41.534763 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 25 00:24:41.541483 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 00:24:41.545124 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 25 00:24:41.546782 kernel: AES CTR mode by8 optimization enabled
Apr 25 00:24:41.544216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 00:24:41.546971 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 00:24:41.548672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
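The GPT complaints above come from a simple consistency check: the backup (alternate) GPT header must sit on the disk's last addressable LBA. With the virtio disk reported as 19775488 512-byte blocks, the expected location works out as below (a sketch of the arithmetic only, not kernel code); the mismatch is the usual sign of a smaller disk image written onto a larger virtual disk, and later entries show disk-uuid rewriting the secondary header:

```python
# The backup GPT header belongs on the last addressable sector of the disk.
disk_blocks = 19775488            # from the virtio_blk line: [vda] 19775488 512-byte logical blocks
expected_alt_lba = disk_blocks - 1  # LBA addressing is zero-based

recorded_alt_lba = 9289727        # what the primary header on this image claims

print(expected_alt_lba)           # 19775487, the value in the kernel's complaint
assert recorded_alt_lba != expected_alt_lba  # hence "GPT:9289727 != 19775487"
```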
Apr 25 00:24:41.559482 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (461)
Apr 25 00:24:41.559505 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Apr 25 00:24:41.559685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:41.562951 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:41.569386 kernel: libata version 3.00 loaded.
Apr 25 00:24:41.573185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:41.579623 kernel: ahci 0000:00:1f.2: version 3.0
Apr 25 00:24:41.579742 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 25 00:24:41.579751 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 25 00:24:41.579821 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 25 00:24:41.575140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 00:24:41.584836 kernel: scsi host0: ahci
Apr 25 00:24:41.584998 kernel: scsi host1: ahci
Apr 25 00:24:41.585071 kernel: scsi host2: ahci
Apr 25 00:24:41.587355 kernel: scsi host3: ahci
Apr 25 00:24:41.588772 kernel: scsi host4: ahci
Apr 25 00:24:41.588940 kernel: scsi host5: ahci
Apr 25 00:24:41.589343 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 25 00:24:41.591360 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 25 00:24:41.591394 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 25 00:24:41.593558 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 25 00:24:41.593570 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 25 00:24:41.595777 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 25 00:24:41.598537 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 25 00:24:41.603363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 25 00:24:41.608222 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 25 00:24:41.608785 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 25 00:24:41.614679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 25 00:24:41.630468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 25 00:24:41.631147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 00:24:41.631189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:41.634179 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:41.638257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:41.645187 disk-uuid[567]: Primary Header is updated.
Apr 25 00:24:41.645187 disk-uuid[567]: Secondary Entries is updated.
Apr 25 00:24:41.645187 disk-uuid[567]: Secondary Header is updated.
Apr 25 00:24:41.648220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 25 00:24:41.651340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 25 00:24:41.652891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:41.657183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 25 00:24:41.657297 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 00:24:41.681262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 00:24:41.909350 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 25 00:24:41.909450 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 25 00:24:41.909459 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 25 00:24:41.910350 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 25 00:24:41.911357 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 25 00:24:41.914458 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 25 00:24:41.914477 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 25 00:24:41.914486 kernel: ata3.00: applying bridge limits
Apr 25 00:24:41.915917 kernel: ata3.00: configured for UDMA/100
Apr 25 00:24:41.918352 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 25 00:24:41.957098 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 25 00:24:41.957278 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 25 00:24:41.971357 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 25 00:24:42.656363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 25 00:24:42.656425 disk-uuid[570]: The operation has completed successfully.
Apr 25 00:24:42.675002 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 25 00:24:42.675096 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 25 00:24:42.687615 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 25 00:24:42.693544 sh[600]: Success
Apr 25 00:24:42.704349 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 25 00:24:42.729075 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 25 00:24:42.744513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 25 00:24:42.748443 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 25 00:24:42.757144 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 25 00:24:42.757171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:24:42.757180 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 25 00:24:42.758525 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 25 00:24:42.759526 kernel: BTRFS info (device dm-0): using free space tree
Apr 25 00:24:42.763944 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 25 00:24:42.766482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 25 00:24:42.777458 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 25 00:24:42.779258 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 25 00:24:42.787084 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:24:42.787112 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:24:42.787121 kernel: BTRFS info (device vda6): using free space tree
Apr 25 00:24:42.790336 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 25 00:24:42.796399 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 25 00:24:42.798737 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:24:42.805072 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 25 00:24:42.811513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 25 00:24:42.846685 ignition[701]: Ignition 2.19.0
Apr 25 00:24:42.846700 ignition[701]: Stage: fetch-offline
Apr 25 00:24:42.846728 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:42.846734 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:42.846821 ignition[701]: parsed url from cmdline: ""
Apr 25 00:24:42.846824 ignition[701]: no config URL provided
Apr 25 00:24:42.846828 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Apr 25 00:24:42.846833 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Apr 25 00:24:42.846849 ignition[701]: op(1): [started] loading QEMU firmware config module
Apr 25 00:24:42.846853 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 25 00:24:42.860023 ignition[701]: op(1): [finished] loading QEMU firmware config module
Apr 25 00:24:42.863626 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 00:24:42.875467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 00:24:42.892033 systemd-networkd[788]: lo: Link UP
Apr 25 00:24:42.892053 systemd-networkd[788]: lo: Gained carrier
Apr 25 00:24:42.892897 systemd-networkd[788]: Enumeration completed
Apr 25 00:24:42.893081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 00:24:42.893347 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:24:42.893349 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 00:24:42.894119 systemd-networkd[788]: eth0: Link UP
Apr 25 00:24:42.894121 systemd-networkd[788]: eth0: Gained carrier
Apr 25 00:24:42.894126 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:24:42.895352 systemd[1]: Reached target network.target - Network.
Apr 25 00:24:42.921371 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 25 00:24:42.965786 ignition[701]: parsing config with SHA512: be639628ce2190221cf301897100b5224deff4f17b4e9ebdc7dc18ade0ef892688de31468bab472b5de21330c0a36f7eeddaac9c7d0a39d0351a0fe261e04829
Apr 25 00:24:42.970668 unknown[701]: fetched base config from "system"
Apr 25 00:24:42.970676 unknown[701]: fetched user config from "qemu"
Apr 25 00:24:42.970974 ignition[701]: fetch-offline: fetch-offline passed
Apr 25 00:24:42.971017 ignition[701]: Ignition finished successfully
Apr 25 00:24:42.974347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 00:24:42.976841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 25 00:24:42.986488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
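The "parsing config with SHA512: ..." entry above is Ignition logging a plain SHA-512 digest of the merged configuration bytes before applying them. A minimal sketch with a stand-in config (the JSON below is hypothetical, so its digest will not match the one in this log; the real config arrives over the QEMU firmware channel):

```python
import hashlib

# Stand-in Ignition-style config; the real config (and thus the real digest)
# is supplied by the hypervisor and is not reproduced here.
config = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, the same width as the digest in the log
```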
Apr 25 00:24:42.997310 ignition[792]: Ignition 2.19.0
Apr 25 00:24:42.997354 ignition[792]: Stage: kargs
Apr 25 00:24:42.997491 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:42.997498 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:42.998082 ignition[792]: kargs: kargs passed
Apr 25 00:24:42.998109 ignition[792]: Ignition finished successfully
Apr 25 00:24:43.001956 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 25 00:24:43.013493 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 25 00:24:43.023021 ignition[800]: Ignition 2.19.0
Apr 25 00:24:43.023035 ignition[800]: Stage: disks
Apr 25 00:24:43.023158 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:43.023165 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:43.025313 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 25 00:24:43.023824 ignition[800]: disks: disks passed
Apr 25 00:24:43.026794 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 25 00:24:43.023856 ignition[800]: Ignition finished successfully
Apr 25 00:24:43.028983 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 25 00:24:43.031772 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 00:24:43.033858 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 00:24:43.036682 systemd[1]: Reached target basic.target - Basic System.
Apr 25 00:24:43.051473 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 25 00:24:43.060768 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 25 00:24:43.065410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 25 00:24:43.068271 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 25 00:24:43.144353 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 25 00:24:43.145050 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 25 00:24:43.146781 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 25 00:24:43.159424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 00:24:43.161524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 25 00:24:43.163070 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 25 00:24:43.163097 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 25 00:24:43.163112 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 00:24:43.167364 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Apr 25 00:24:43.170238 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:24:43.170250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:24:43.170258 kernel: BTRFS info (device vda6): using free space tree
Apr 25 00:24:43.174345 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 25 00:24:43.179468 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 00:24:43.181036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 25 00:24:43.183113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 25 00:24:43.211943 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Apr 25 00:24:43.216542 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Apr 25 00:24:43.219136 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 25 00:24:43.222813 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 25 00:24:43.281867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 25 00:24:43.298457 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 25 00:24:43.299805 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 25 00:24:43.309358 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:24:43.319767 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 25 00:24:43.328524 ignition[933]: INFO : Ignition 2.19.0
Apr 25 00:24:43.328524 ignition[933]: INFO : Stage: mount
Apr 25 00:24:43.330490 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:43.330490 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:43.330490 ignition[933]: INFO : mount: mount passed
Apr 25 00:24:43.330490 ignition[933]: INFO : Ignition finished successfully
Apr 25 00:24:43.333792 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 25 00:24:43.344466 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 25 00:24:43.755753 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 25 00:24:43.764596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 00:24:43.770349 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Apr 25 00:24:43.773098 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 25 00:24:43.773117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 25 00:24:43.773126 kernel: BTRFS info (device vda6): using free space tree
Apr 25 00:24:43.777348 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 25 00:24:43.777760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 00:24:43.803535 ignition[962]: INFO : Ignition 2.19.0
Apr 25 00:24:43.803535 ignition[962]: INFO : Stage: files
Apr 25 00:24:43.805704 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:43.805704 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:43.805704 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 25 00:24:43.805704 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 25 00:24:43.805704 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 25 00:24:43.813533 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 25 00:24:43.815313 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 25 00:24:43.817251 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 25 00:24:43.817251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 25 00:24:43.817251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 25 00:24:43.817251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 25 00:24:43.817251 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 25 00:24:43.815679 unknown[962]: wrote ssh authorized keys file for user: core
Apr 25 00:24:43.847856 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 25 00:24:43.910508 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 25 00:24:43.910508 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 25 00:24:43.915472 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 25 00:24:44.120712 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 25 00:24:44.181541 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 25 00:24:44.181541 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:24:44.186248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 25 00:24:44.427051 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 25 00:24:44.644472 systemd-networkd[788]: eth0: Gained IPv6LL
Apr 25 00:24:44.657121 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 25 00:24:44.657121 ignition[962]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 25 00:24:44.661803 ignition[962]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 25 00:24:44.689687 ignition[962]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 25 00:24:44.693097 ignition[962]: INFO : files: op(13): op(14): [finished]
removing enablement symlink(s) for "coreos-metadata.service" Apr 25 00:24:44.695104 ignition[962]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Apr 25 00:24:44.695104 ignition[962]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Apr 25 00:24:44.695104 ignition[962]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Apr 25 00:24:44.695104 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 25 00:24:44.695104 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 25 00:24:44.695104 ignition[962]: INFO : files: files passed Apr 25 00:24:44.695104 ignition[962]: INFO : Ignition finished successfully Apr 25 00:24:44.699149 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 25 00:24:44.717489 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 25 00:24:44.720863 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 25 00:24:44.721820 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 25 00:24:44.721893 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
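The Ignition "files" stage above is driven by a user-supplied config. A hypothetical Butane config (which transpiles to Ignition JSON) that would produce operations like those logged — the file downloads, the kubernetes.raw symlink, the containerd drop-in, and the unit presets — might look like the sketch below; this is a reconstruction for illustration from the log, not the config actually used on this machine, and the drop-in body is invented since the log records only the file name:

```yaml
# Hypothetical Butane config (Flatcar variant) reconstructed from the log.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true          # matches "setting preset to enabled" in the log
    - name: coreos-metadata.service
      enabled: false         # matches "setting preset to disabled" in the log
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # body is hypothetical; the log records only the drop-in's name
            [Service]
```

Note the op identifiers in the log (op(9), op(a), op(b), …) are hexadecimal, so the sequence is continuous even though it appears to skip from 9 to letters.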
Apr 25 00:24:44.732665 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 25 00:24:44.736071 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:24:44.736071 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:24:44.740068 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 00:24:44.743000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 00:24:44.746261 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 25 00:24:44.761506 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 25 00:24:44.777179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 25 00:24:44.777268 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 25 00:24:44.780466 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 25 00:24:44.783144 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 25 00:24:44.785655 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 25 00:24:44.786189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 25 00:24:44.801265 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 00:24:44.802669 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 25 00:24:44.813921 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:24:44.814863 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 00:24:44.817866 systemd[1]: Stopped target timers.target - Timer Units.
Apr 25 00:24:44.820729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 25 00:24:44.820833 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 00:24:44.825232 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 25 00:24:44.825925 systemd[1]: Stopped target basic.target - Basic System.
Apr 25 00:24:44.829823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 25 00:24:44.831768 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 00:24:44.834627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 25 00:24:44.837200 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 25 00:24:44.839907 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 00:24:44.842404 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 25 00:24:44.845404 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 25 00:24:44.847901 systemd[1]: Stopped target swap.target - Swaps.
Apr 25 00:24:44.850263 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 25 00:24:44.850404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 00:24:44.854304 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:24:44.854980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 00:24:44.858796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 25 00:24:44.862174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 00:24:44.862799 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 25 00:24:44.862940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 25 00:24:44.867948 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 25 00:24:44.868046 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 00:24:44.870675 systemd[1]: Stopped target paths.target - Path Units.
Apr 25 00:24:44.872949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 25 00:24:44.877414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 00:24:44.878046 systemd[1]: Stopped target slices.target - Slice Units.
Apr 25 00:24:44.881374 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 25 00:24:44.883775 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 25 00:24:44.883848 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 00:24:44.885839 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 25 00:24:44.885904 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 00:24:44.888806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 25 00:24:44.888894 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 00:24:44.891604 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 25 00:24:44.891678 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 25 00:24:44.911647 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 25 00:24:44.914725 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 25 00:24:44.915908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 25 00:24:44.916031 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 00:24:44.918827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 25 00:24:44.918893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 00:24:44.927766 ignition[1016]: INFO : Ignition 2.19.0
Apr 25 00:24:44.927766 ignition[1016]: INFO : Stage: umount
Apr 25 00:24:44.927766 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 00:24:44.927766 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 25 00:24:44.927766 ignition[1016]: INFO : umount: umount passed
Apr 25 00:24:44.927766 ignition[1016]: INFO : Ignition finished successfully
Apr 25 00:24:44.923901 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 25 00:24:44.924027 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 25 00:24:44.928283 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 25 00:24:44.928403 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 25 00:24:44.930441 systemd[1]: Stopped target network.target - Network.
Apr 25 00:24:44.932406 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 25 00:24:44.932446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 25 00:24:44.935108 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 25 00:24:44.935140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 25 00:24:44.938236 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 25 00:24:44.938268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 25 00:24:44.941379 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 25 00:24:44.941422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 25 00:24:44.944214 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 25 00:24:44.945635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 25 00:24:44.946900 systemd[1]: sysroot-boot.mount: Deactivated successfully.
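Every entry in this log follows the same shape: a timestamp, a source (a kernel facility or a `name[pid]` process tag), and a free-form message. A minimal parsing sketch, with the format inferred from the log itself rather than from any journal export specification:

```python
import re

# Parse entries like "Apr 25 00:24:44.927766 ignition[1016]: INFO : Stage: umount"
# into (timestamp, source, message). The pattern is inferred from this log;
# real journalctl output carries more fields, so treat this as illustrative.
LINE_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "  # "Apr 25 00:24:44.927766"
    r"(?P<src>kernel|[\w-]+\[\d+\]): "                         # "kernel" or "ignition[1016]"
    r"(?P<msg>.*)$"                                            # rest of the entry
)

def parse(line: str):
    """Return (timestamp, source, message), or None if the line doesn't match."""
    m = LINE_RE.match(line)
    return (m.group("ts"), m.group("src"), m.group("msg")) if m else None

entry = parse("Apr 25 00:24:44.927766 ignition[1016]: INFO : Stage: umount")
# entry[1] is "ignition[1016]"; entry[2] is the Ignition message body
```

Splitting on the source field like this makes it easy to separate the interleaved kernel, systemd, and Ignition streams seen above.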
Apr 25 00:24:44.949958 systemd-networkd[788]: eth0: DHCPv6 lease lost
Apr 25 00:24:44.959370 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 25 00:24:44.959465 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 25 00:24:44.962026 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 25 00:24:44.962109 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 25 00:24:44.964887 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 25 00:24:44.964921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 00:24:44.973613 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 25 00:24:44.974708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 25 00:24:44.974763 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 00:24:44.977238 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 25 00:24:44.977282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:24:44.978362 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 25 00:24:44.978423 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 25 00:24:44.982977 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 25 00:24:44.983021 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:24:44.983588 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 00:24:44.988646 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 25 00:24:44.988721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 25 00:24:44.991909 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 25 00:24:44.991968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 25 00:24:45.000027 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 25 00:24:45.000135 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 25 00:24:45.015616 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 25 00:24:45.016971 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 00:24:45.020222 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 25 00:24:45.020270 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 25 00:24:45.022859 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 25 00:24:45.022882 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 00:24:45.025421 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 25 00:24:45.025456 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 00:24:45.028413 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 25 00:24:45.028442 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 25 00:24:45.031605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 00:24:45.031641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 00:24:45.050545 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 25 00:24:45.051185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 25 00:24:45.051229 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 00:24:45.054136 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 25 00:24:45.054165 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 00:24:45.056961 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 25 00:24:45.056991 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 00:24:45.061120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 00:24:45.061150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:45.064269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 25 00:24:45.064373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 25 00:24:45.067684 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 25 00:24:45.068673 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 25 00:24:45.079366 systemd[1]: Switching root.
Apr 25 00:24:45.106356 systemd-journald[192]: Journal stopped
Apr 25 00:24:45.778825 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Apr 25 00:24:45.778875 kernel: SELinux: policy capability network_peer_controls=1
Apr 25 00:24:45.778891 kernel: SELinux: policy capability open_perms=1
Apr 25 00:24:45.778902 kernel: SELinux: policy capability extended_socket_class=1
Apr 25 00:24:45.778910 kernel: SELinux: policy capability always_check_network=0
Apr 25 00:24:45.778921 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 25 00:24:45.778928 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 25 00:24:45.778939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 25 00:24:45.778946 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 25 00:24:45.778954 kernel: audit: type=1403 audit(1777076685.259:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 25 00:24:45.778966 systemd[1]: Successfully loaded SELinux policy in 34.819ms.
Apr 25 00:24:45.778976 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.606ms.
Apr 25 00:24:45.778985 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 00:24:45.778993 systemd[1]: Detected virtualization kvm.
Apr 25 00:24:45.779003 systemd[1]: Detected architecture x86-64.
Apr 25 00:24:45.779010 systemd[1]: Detected first boot.
Apr 25 00:24:45.779018 systemd[1]: Initializing machine ID from VM UUID.
Apr 25 00:24:45.779026 zram_generator::config[1077]: No configuration found.
Apr 25 00:24:45.779035 systemd[1]: Populated /etc with preset unit settings.
Apr 25 00:24:45.779043 systemd[1]: Queued start job for default target multi-user.target.
Apr 25 00:24:45.779051 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 25 00:24:45.779059 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 25 00:24:45.779069 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 25 00:24:45.779077 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 25 00:24:45.779085 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 25 00:24:45.779092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 25 00:24:45.779101 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 25 00:24:45.779113 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 25 00:24:45.779120 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 25 00:24:45.779128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 00:24:45.779136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 00:24:45.779148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 25 00:24:45.779157 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 25 00:24:45.779165 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 25 00:24:45.779172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 00:24:45.779180 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 25 00:24:45.779187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 00:24:45.779195 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 25 00:24:45.779204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 00:24:45.779212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 25 00:24:45.779221 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 00:24:45.779229 systemd[1]: Reached target swap.target - Swaps.
Apr 25 00:24:45.779237 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 25 00:24:45.779244 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 25 00:24:45.779252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 25 00:24:45.779259 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 25 00:24:45.779267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 00:24:45.779276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 00:24:45.779285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 00:24:45.779293 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 25 00:24:45.779301 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 25 00:24:45.779309 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 25 00:24:45.779379 systemd[1]: Mounting media.mount - External Media Directory...
Apr 25 00:24:45.779390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:45.779413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 25 00:24:45.779422 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 25 00:24:45.779429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 25 00:24:45.779439 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 25 00:24:45.779447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:24:45.779454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 00:24:45.779462 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 25 00:24:45.779470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:24:45.779478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 00:24:45.779486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:24:45.779493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 25 00:24:45.779503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:24:45.779511 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 25 00:24:45.779519 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 25 00:24:45.779529 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 25 00:24:45.779537 kernel: fuse: init (API version 7.39)
Apr 25 00:24:45.779544 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 00:24:45.779552 kernel: loop: module loaded
Apr 25 00:24:45.779559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 00:24:45.779567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 25 00:24:45.779576 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 25 00:24:45.779584 kernel: ACPI: bus type drm_connector registered
Apr 25 00:24:45.779591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 25 00:24:45.779610 systemd-journald[1173]: Collecting audit messages is disabled.
Apr 25 00:24:45.779627 systemd-journald[1173]: Journal started
Apr 25 00:24:45.779643 systemd-journald[1173]: Runtime Journal (/run/log/journal/9cb230b6ffef4e75a1beec72abee4600) is 6.0M, max 48.3M, 42.2M free.
Apr 25 00:24:45.783347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:45.787230 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 00:24:45.787900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 25 00:24:45.789353 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 25 00:24:45.790820 systemd[1]: Mounted media.mount - External Media Directory.
Apr 25 00:24:45.792140 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 25 00:24:45.793605 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 25 00:24:45.795071 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 25 00:24:45.796554 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 25 00:24:45.798259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 00:24:45.800014 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 25 00:24:45.800131 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 25 00:24:45.801803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:24:45.801916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:24:45.803545 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 00:24:45.803654 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 00:24:45.805155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:24:45.805268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:24:45.806980 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 25 00:24:45.807092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 25 00:24:45.808638 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:24:45.808768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:24:45.810666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 00:24:45.812315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 25 00:24:45.814207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 25 00:24:45.820221 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 00:24:45.824556 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 25 00:24:45.830409 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 25 00:24:45.832675 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 25 00:24:45.834129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 25 00:24:45.836170 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 25 00:24:45.838492 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 25 00:24:45.840068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:24:45.840795 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 25 00:24:45.842218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:24:45.842936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 00:24:45.846732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 00:24:45.853934 systemd-journald[1173]: Time spent on flushing to /var/log/journal/9cb230b6ffef4e75a1beec72abee4600 is 9.074ms for 992 entries.
Apr 25 00:24:45.853934 systemd-journald[1173]: System Journal (/var/log/journal/9cb230b6ffef4e75a1beec72abee4600) is 8.0M, max 195.6M, 187.6M free.
Apr 25 00:24:45.885184 systemd-journald[1173]: Received client request to flush runtime journal.
Apr 25 00:24:45.852280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 25 00:24:45.856176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
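The journald messages above show the two journal stores at work: a runtime journal in /run (6.0M used, 48.3M cap) that is flushed into a persistent journal under /var/log/journal (8.0M used, 195.6M cap) by systemd-journal-flush.service. Those caps default to fractions of the backing filesystem, but they can be pinned explicitly. A hypothetical journald.conf sketch — the option names are real journald settings, while the values here are illustrative, not this system's actual configuration:

```ini
# Sketch of /etc/systemd/journald.conf size tuning (values are examples only).
[Journal]
Storage=persistent     # flush /run journal to /var/log/journal, as seen above
RuntimeMaxUse=48M      # cap for the volatile journal in /run
SystemMaxUse=195M      # cap for the persistent journal in /var
```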
Apr 25 00:24:45.858301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 25 00:24:45.860944 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 25 00:24:45.865127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 25 00:24:45.867098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 25 00:24:45.870608 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 25 00:24:45.877173 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 25 00:24:45.877182 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 25 00:24:45.879869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 25 00:24:45.887520 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 25 00:24:45.889075 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 25 00:24:45.906249 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 25 00:24:45.912564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 25 00:24:45.922625 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Apr 25 00:24:45.922646 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Apr 25 00:24:45.925583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 25 00:24:46.186773 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 25 00:24:46.199577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 25 00:24:46.215860 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Apr 25 00:24:46.229638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 25 00:24:46.239533 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 00:24:46.250475 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 25 00:24:46.259577 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 25 00:24:46.262534 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1246)
Apr 25 00:24:46.287104 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 25 00:24:46.300307 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 25 00:24:46.302240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 25 00:24:46.308345 kernel: ACPI: button: Power Button [PWRF]
Apr 25 00:24:46.327455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 25 00:24:46.338860 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 25 00:24:46.339047 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 25 00:24:46.339134 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 25 00:24:46.340935 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 25 00:24:46.343073 systemd-networkd[1252]: lo: Link UP
Apr 25 00:24:46.343090 systemd-networkd[1252]: lo: Gained carrier
Apr 25 00:24:46.343906 systemd-networkd[1252]: Enumeration completed
Apr 25 00:24:46.344085 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 00:24:46.344291 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:24:46.344294 systemd-networkd[1252]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 00:24:46.344878 systemd-networkd[1252]: eth0: Link UP
Apr 25 00:24:46.344880 systemd-networkd[1252]: eth0: Gained carrier
Apr 25 00:24:46.344888 systemd-networkd[1252]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 00:24:46.352347 kernel: mousedev: PS/2 mouse device common for all mice
Apr 25 00:24:46.358562 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 25 00:24:46.363417 systemd-networkd[1252]: eth0: DHCPv4 address 10.0.0.3/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 25 00:24:46.363564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:46.365824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 00:24:46.365965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:46.368513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 00:24:46.421960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 00:24:46.450051 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 25 00:24:46.460611 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 25 00:24:46.466681 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:24:46.495616 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 25 00:24:46.497730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 00:24:46.510533 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 25 00:24:46.514132 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 00:24:46.540677 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 25 00:24:46.542900 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 25 00:24:46.544660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 25 00:24:46.544678 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 00:24:46.546017 systemd[1]: Reached target machines.target - Containers.
Apr 25 00:24:46.547974 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 25 00:24:46.567471 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 25 00:24:46.570261 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 25 00:24:46.571819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:24:46.572538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 25 00:24:46.576492 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 25 00:24:46.577902 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 25 00:24:46.581628 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 25 00:24:46.588856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 25 00:24:46.591338 kernel: loop0: detected capacity change from 0 to 228704
Apr 25 00:24:46.600061 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 25 00:24:46.600579 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 25 00:24:46.604344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 25 00:24:46.636364 kernel: loop1: detected capacity change from 0 to 140768
Apr 25 00:24:46.669340 kernel: loop2: detected capacity change from 0 to 142488
Apr 25 00:24:46.702347 kernel: loop3: detected capacity change from 0 to 228704
Apr 25 00:24:46.709366 kernel: loop4: detected capacity change from 0 to 140768
Apr 25 00:24:46.717340 kernel: loop5: detected capacity change from 0 to 142488
Apr 25 00:24:46.724184 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 25 00:24:46.724535 (sd-merge)[1315]: Merged extensions into '/usr'.
Apr 25 00:24:46.726930 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 25 00:24:46.726947 systemd[1]: Reloading...
Apr 25 00:24:46.760366 zram_generator::config[1343]: No configuration found.
Apr 25 00:24:46.767524 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 25 00:24:46.834233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:24:46.870713 systemd[1]: Reloading finished in 143 ms.
Apr 25 00:24:46.883273 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 25 00:24:46.885355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 25 00:24:46.899464 systemd[1]: Starting ensure-sysext.service...
Apr 25 00:24:46.901613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 00:24:46.904521 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)...
Apr 25 00:24:46.904540 systemd[1]: Reloading...
Apr 25 00:24:46.916912 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 25 00:24:46.917119 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 25 00:24:46.917693 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 25 00:24:46.917862 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 25 00:24:46.917902 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 25 00:24:46.923081 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:24:46.923157 systemd-tmpfiles[1388]: Skipping /boot
Apr 25 00:24:46.930782 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 00:24:46.930851 systemd-tmpfiles[1388]: Skipping /boot
Apr 25 00:24:46.936360 zram_generator::config[1421]: No configuration found.
Apr 25 00:24:47.002679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 00:24:47.039076 systemd[1]: Reloading finished in 134 ms.
Apr 25 00:24:47.051561 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 00:24:47.073860 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 25 00:24:47.076302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 25 00:24:47.078681 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 25 00:24:47.081421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 00:24:47.086288 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 25 00:24:47.091424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.091678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:24:47.092517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:24:47.102083 augenrules[1483]: No rules
Apr 25 00:24:47.109528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:24:47.111993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:24:47.113642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:24:47.113775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.114561 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 25 00:24:47.116716 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 25 00:24:47.118723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:24:47.118825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:24:47.120762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:24:47.120870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:24:47.122972 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:24:47.123089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:24:47.126973 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 25 00:24:47.132805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.132940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:24:47.133195 systemd-resolved[1466]: Positive Trust Anchors:
Apr 25 00:24:47.133221 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 00:24:47.133246 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 00:24:47.136223 systemd-resolved[1466]: Defaulting to hostname 'linux'.
Apr 25 00:24:47.137555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:24:47.139760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:24:47.141927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:24:47.143281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:24:47.144135 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 25 00:24:47.145491 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 00:24:47.145663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.146239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 00:24:47.147222 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 25 00:24:47.150071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:24:47.150181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:24:47.152012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:24:47.152105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:24:47.154944 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 25 00:24:47.156780 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:24:47.156907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:24:47.166257 systemd[1]: Finished ensure-sysext.service.
Apr 25 00:24:47.168494 systemd[1]: Reached target network.target - Network.
Apr 25 00:24:47.169698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 00:24:47.171273 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.171515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 00:24:47.180439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 00:24:47.182782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 00:24:47.184878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 00:24:47.188481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 00:24:47.189103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 00:24:47.190230 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 25 00:24:47.190783 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 00:24:47.190804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 25 00:24:47.191142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 00:24:47.191239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 00:24:47.193313 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 00:24:47.193515 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 00:24:47.195105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 00:24:47.195215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 00:24:47.196957 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 00:24:47.197081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 00:24:47.199145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 00:24:47.199196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 00:24:47.233986 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 25 00:24:47.235959 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 00:24:48.016562 systemd-resolved[1466]: Clock change detected. Flushing caches.
Apr 25 00:24:48.016596 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 25 00:24:48.016625 systemd-timesyncd[1527]: Initial clock synchronization to Sat 2026-04-25 00:24:48.016504 UTC.
Apr 25 00:24:48.017719 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 25 00:24:48.019319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 25 00:24:48.020930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 25 00:24:48.022539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 25 00:24:48.022566 systemd[1]: Reached target paths.target - Path Units.
Apr 25 00:24:48.023719 systemd[1]: Reached target time-set.target - System Time Set.
Apr 25 00:24:48.025106 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 25 00:24:48.026546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 25 00:24:48.028153 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 00:24:48.030060 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 25 00:24:48.032594 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 25 00:24:48.034734 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 25 00:24:48.040200 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 25 00:24:48.041718 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 00:24:48.042962 systemd[1]: Reached target basic.target - Basic System.
Apr 25 00:24:48.044258 systemd[1]: System is tainted: cgroupsv1
Apr 25 00:24:48.044296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:24:48.044309 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 25 00:24:48.045143 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 25 00:24:48.047238 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 25 00:24:48.049171 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 25 00:24:48.052083 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 25 00:24:48.053683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 25 00:24:48.054897 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 25 00:24:48.056892 jq[1540]: false
Apr 25 00:24:48.057240 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 25 00:24:48.061401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 25 00:24:48.066622 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 25 00:24:48.069622 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 25 00:24:48.071278 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 25 00:24:48.073603 systemd[1]: Starting update-engine.service - Update Engine...
Apr 25 00:24:48.077567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found loop3
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found loop4
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found loop5
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found sr0
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda1
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda2
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda3
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found usr
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda4
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda6
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda7
Apr 25 00:24:48.080459 extend-filesystems[1542]: Found vda9
Apr 25 00:24:48.080459 extend-filesystems[1542]: Checking size of /dev/vda9
Apr 25 00:24:48.087922 dbus-daemon[1539]: [system] SELinux support is enabled
Apr 25 00:24:48.081900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 25 00:24:48.108822 extend-filesystems[1542]: Resized partition /dev/vda9
Apr 25 00:24:48.112949 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 25 00:24:48.112993 update_engine[1557]: I20260425 00:24:48.104266 1557 main.cc:92] Flatcar Update Engine starting
Apr 25 00:24:48.082052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 25 00:24:48.113202 extend-filesystems[1574]: resize2fs 1.47.1 (20-May-2024)
Apr 25 00:24:48.116363 jq[1560]: true
Apr 25 00:24:48.082221 systemd[1]: motdgen.service: Deactivated successfully.
Apr 25 00:24:48.082353 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 25 00:24:48.088494 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 25 00:24:48.116719 jq[1571]: true
Apr 25 00:24:48.097767 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 25 00:24:48.097930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 25 00:24:48.107750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 25 00:24:48.107769 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 25 00:24:48.113246 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 25 00:24:48.113259 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 25 00:24:48.116101 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 25 00:24:48.119721 systemd[1]: Started update-engine.service - Update Engine.
Apr 25 00:24:48.121732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 25 00:24:48.123609 tar[1568]: linux-amd64/LICENSE
Apr 25 00:24:48.123609 tar[1568]: linux-amd64/helm
Apr 25 00:24:48.122635 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 25 00:24:48.125018 update_engine[1557]: I20260425 00:24:48.124673 1557 update_check_scheduler.cc:74] Next update check in 3m39s
Apr 25 00:24:48.131478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1249)
Apr 25 00:24:48.136694 systemd-logind[1554]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 25 00:24:48.136709 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 25 00:24:48.138490 systemd-logind[1554]: New seat seat0.
Apr 25 00:24:48.140738 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 25 00:24:48.147006 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 25 00:24:48.169099 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 25 00:24:48.169099 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 25 00:24:48.169099 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 25 00:24:48.175943 extend-filesystems[1542]: Resized filesystem in /dev/vda9
Apr 25 00:24:48.169185 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 25 00:24:48.169848 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 25 00:24:48.170023 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 25 00:24:48.183619 bash[1601]: Updated "/home/core/.ssh/authorized_keys"
Apr 25 00:24:48.184564 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 25 00:24:48.186732 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 25 00:24:48.270227 containerd[1573]: time="2026-04-25T00:24:48.270160486Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 25 00:24:48.288949 containerd[1573]: time="2026-04-25T00:24:48.288739865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290064 containerd[1573]: time="2026-04-25T00:24:48.290044315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290122 containerd[1573]: time="2026-04-25T00:24:48.290114583Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 25 00:24:48.290355 containerd[1573]: time="2026-04-25T00:24:48.290147941Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 25 00:24:48.290355 containerd[1573]: time="2026-04-25T00:24:48.290264308Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 25 00:24:48.290355 containerd[1573]: time="2026-04-25T00:24:48.290275756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290355 containerd[1573]: time="2026-04-25T00:24:48.290310865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290355 containerd[1573]: time="2026-04-25T00:24:48.290319763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290626 containerd[1573]: time="2026-04-25T00:24:48.290612180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290664 containerd[1573]: time="2026-04-25T00:24:48.290657793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290695 containerd[1573]: time="2026-04-25T00:24:48.290688564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290719 containerd[1573]: time="2026-04-25T00:24:48.290713947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290806 containerd[1573]: time="2026-04-25T00:24:48.290798549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.290960 containerd[1573]: time="2026-04-25T00:24:48.290950719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 25 00:24:48.291086 containerd[1573]: time="2026-04-25T00:24:48.291076420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 00:24:48.291122 containerd[1573]: time="2026-04-25T00:24:48.291115468Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 25 00:24:48.291196 containerd[1573]: time="2026-04-25T00:24:48.291188167Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 25 00:24:48.291260 containerd[1573]: time="2026-04-25T00:24:48.291253042Z" level=info msg="metadata content store policy set" policy=shared
Apr 25 00:24:48.295892 containerd[1573]: time="2026-04-25T00:24:48.295878016Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.295949332Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.295963309Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.295975052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.295984700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296071636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296264686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296321773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296332260Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296340883Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296352238Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296360706Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296369315Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296378815Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.297943 containerd[1573]: time="2026-04-25T00:24:48.296388730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296400205Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296419041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296465321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296482164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296492462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296503617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296512546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296535804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296549275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296558350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296567922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296577001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296587141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298176 containerd[1573]: time="2026-04-25T00:24:48.296595431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296603692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296611460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296622532Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296636535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..."
type=io.containerd.grpc.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296645824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296652923Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296682370Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296694299Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296701680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296709825Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296716408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296724647Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296731632Z" level=info msg="NRI interface is disabled by configuration." Apr 25 00:24:48.298349 containerd[1573]: time="2026-04-25T00:24:48.296738535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.296922399Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.296967870Z" level=info msg="Connect containerd service" Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.296993565Z" level=info msg="using legacy CRI server" Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.296999047Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.297068688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 25 00:24:48.298552 containerd[1573]: time="2026-04-25T00:24:48.297474968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 00:24:48.298992 containerd[1573]: time="2026-04-25T00:24:48.298965950Z" level=info msg="Start subscribing containerd event" Apr 25 00:24:48.299057 containerd[1573]: time="2026-04-25T00:24:48.299043701Z" level=info msg="Start recovering state" Apr 25 00:24:48.299256 containerd[1573]: time="2026-04-25T00:24:48.299246253Z" level=info msg="Start event monitor" Apr 25 00:24:48.299293 containerd[1573]: time="2026-04-25T00:24:48.299285017Z" 
level=info msg="Start snapshots syncer" Apr 25 00:24:48.299318 containerd[1573]: time="2026-04-25T00:24:48.299313472Z" level=info msg="Start cni network conf syncer for default" Apr 25 00:24:48.299342 containerd[1573]: time="2026-04-25T00:24:48.299337860Z" level=info msg="Start streaming server" Apr 25 00:24:48.300208 containerd[1573]: time="2026-04-25T00:24:48.300191801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 25 00:24:48.300297 containerd[1573]: time="2026-04-25T00:24:48.300287894Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 25 00:24:48.300363 containerd[1573]: time="2026-04-25T00:24:48.300355332Z" level=info msg="containerd successfully booted in 0.030863s" Apr 25 00:24:48.300567 systemd[1]: Started containerd.service - containerd container runtime. Apr 25 00:24:48.536849 tar[1568]: linux-amd64/README.md Apr 25 00:24:48.550816 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 25 00:24:48.634628 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 25 00:24:48.652796 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 25 00:24:48.667737 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 25 00:24:48.672840 systemd[1]: issuegen.service: Deactivated successfully. Apr 25 00:24:48.673059 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 25 00:24:48.675750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 25 00:24:48.684715 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 25 00:24:48.697734 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 25 00:24:48.700296 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 25 00:24:48.701897 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 25 00:24:48.816963 systemd-networkd[1252]: eth0: Gained IPv6LL Apr 25 00:24:48.819331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 25 00:24:48.821575 systemd[1]: Reached target network-online.target - Network is Online. Apr 25 00:24:48.833635 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 25 00:24:48.836262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:24:48.838459 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 25 00:24:48.851873 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 25 00:24:48.852068 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 25 00:24:48.853975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 25 00:24:48.856143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 25 00:24:49.438336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:24:49.440195 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 25 00:24:49.441741 systemd[1]: Startup finished in 5.375s (kernel) + 3.436s (userspace) = 8.811s. 
Apr 25 00:24:49.442158 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:24:49.836146 kubelet[1676]: E0425 00:24:49.836020 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:24:49.838142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:24:49.838286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:24:53.969070 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 25 00:24:53.980661 systemd[1]: Started sshd@0-10.0.0.3:22-10.0.0.1:57884.service - OpenSSH per-connection server daemon (10.0.0.1:57884). Apr 25 00:24:54.022736 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 57884 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.024284 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.029898 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 25 00:24:54.039657 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 25 00:24:54.041129 systemd-logind[1554]: New session 1 of user core. Apr 25 00:24:54.048724 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 25 00:24:54.050394 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 25 00:24:54.055866 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 25 00:24:54.120779 systemd[1695]: Queued start job for default target default.target. 
Apr 25 00:24:54.121057 systemd[1695]: Created slice app.slice - User Application Slice. Apr 25 00:24:54.121070 systemd[1695]: Reached target paths.target - Paths. Apr 25 00:24:54.121078 systemd[1695]: Reached target timers.target - Timers. Apr 25 00:24:54.132512 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 25 00:24:54.137896 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 25 00:24:54.137948 systemd[1695]: Reached target sockets.target - Sockets. Apr 25 00:24:54.137956 systemd[1695]: Reached target basic.target - Basic System. Apr 25 00:24:54.137979 systemd[1695]: Reached target default.target - Main User Target. Apr 25 00:24:54.137996 systemd[1695]: Startup finished in 78ms. Apr 25 00:24:54.138341 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 25 00:24:54.139533 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 25 00:24:54.198853 systemd[1]: Started sshd@1-10.0.0.3:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). Apr 25 00:24:54.229323 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.230407 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.233604 systemd-logind[1554]: New session 2 of user core. Apr 25 00:24:54.244654 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 25 00:24:54.295824 sshd[1707]: pam_unix(sshd:session): session closed for user core Apr 25 00:24:54.305664 systemd[1]: Started sshd@2-10.0.0.3:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). Apr 25 00:24:54.306068 systemd[1]: sshd@1-10.0.0.3:22-10.0.0.1:57888.service: Deactivated successfully. Apr 25 00:24:54.307318 systemd[1]: session-2.scope: Deactivated successfully. Apr 25 00:24:54.308287 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. 
Apr 25 00:24:54.309112 systemd-logind[1554]: Removed session 2. Apr 25 00:24:54.333489 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.334487 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.337705 systemd-logind[1554]: New session 3 of user core. Apr 25 00:24:54.349724 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 25 00:24:54.398962 sshd[1712]: pam_unix(sshd:session): session closed for user core Apr 25 00:24:54.414690 systemd[1]: Started sshd@3-10.0.0.3:22-10.0.0.1:57918.service - OpenSSH per-connection server daemon (10.0.0.1:57918). Apr 25 00:24:54.415258 systemd[1]: sshd@2-10.0.0.3:22-10.0.0.1:57902.service: Deactivated successfully. Apr 25 00:24:54.416494 systemd[1]: session-3.scope: Deactivated successfully. Apr 25 00:24:54.417165 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Apr 25 00:24:54.418086 systemd-logind[1554]: Removed session 3. Apr 25 00:24:54.442860 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 57918 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.443874 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.447141 systemd-logind[1554]: New session 4 of user core. Apr 25 00:24:54.458686 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 25 00:24:54.511558 sshd[1720]: pam_unix(sshd:session): session closed for user core Apr 25 00:24:54.513500 systemd[1]: sshd@3-10.0.0.3:22-10.0.0.1:57918.service: Deactivated successfully. Apr 25 00:24:54.514935 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Apr 25 00:24:54.521668 systemd[1]: Started sshd@4-10.0.0.3:22-10.0.0.1:57922.service - OpenSSH per-connection server daemon (10.0.0.1:57922). Apr 25 00:24:54.521951 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 25 00:24:54.522655 systemd-logind[1554]: Removed session 4. Apr 25 00:24:54.550306 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 57922 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.551327 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.554610 systemd-logind[1554]: New session 5 of user core. Apr 25 00:24:54.560681 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 25 00:24:54.616514 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 25 00:24:54.616735 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:24:54.629541 sudo[1735]: pam_unix(sudo:session): session closed for user root Apr 25 00:24:54.630982 sshd[1731]: pam_unix(sshd:session): session closed for user core Apr 25 00:24:54.642677 systemd[1]: Started sshd@5-10.0.0.3:22-10.0.0.1:57928.service - OpenSSH per-connection server daemon (10.0.0.1:57928). Apr 25 00:24:54.643000 systemd[1]: sshd@4-10.0.0.3:22-10.0.0.1:57922.service: Deactivated successfully. Apr 25 00:24:54.644977 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Apr 25 00:24:54.645390 systemd[1]: session-5.scope: Deactivated successfully. Apr 25 00:24:54.646213 systemd-logind[1554]: Removed session 5. Apr 25 00:24:54.670769 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 57928 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.671782 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.674980 systemd-logind[1554]: New session 6 of user core. Apr 25 00:24:54.684661 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 25 00:24:54.735097 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 25 00:24:54.735305 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:24:54.738199 sudo[1745]: pam_unix(sudo:session): session closed for user root Apr 25 00:24:54.742166 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 25 00:24:54.742368 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:24:54.757700 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 25 00:24:54.759110 auditctl[1748]: No rules Apr 25 00:24:54.759798 systemd[1]: audit-rules.service: Deactivated successfully. Apr 25 00:24:54.759972 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 25 00:24:54.761218 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 25 00:24:54.782196 augenrules[1767]: No rules Apr 25 00:24:54.783174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 25 00:24:54.784094 sudo[1744]: pam_unix(sudo:session): session closed for user root Apr 25 00:24:54.785287 sshd[1737]: pam_unix(sshd:session): session closed for user core Apr 25 00:24:54.801671 systemd[1]: Started sshd@6-10.0.0.3:22-10.0.0.1:57942.service - OpenSSH per-connection server daemon (10.0.0.1:57942). Apr 25 00:24:54.802066 systemd[1]: sshd@5-10.0.0.3:22-10.0.0.1:57928.service: Deactivated successfully. Apr 25 00:24:54.803238 systemd[1]: session-6.scope: Deactivated successfully. Apr 25 00:24:54.803707 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. Apr 25 00:24:54.804642 systemd-logind[1554]: Removed session 6. 
Apr 25 00:24:54.829713 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 57942 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:24:54.830727 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:24:54.834067 systemd-logind[1554]: New session 7 of user core. Apr 25 00:24:54.844679 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 25 00:24:54.895688 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 25 00:24:54.895901 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 25 00:24:55.112663 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 25 00:24:55.112829 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 25 00:24:55.320195 dockerd[1798]: time="2026-04-25T00:24:55.320126544Z" level=info msg="Starting up" Apr 25 00:24:55.525295 dockerd[1798]: time="2026-04-25T00:24:55.525168618Z" level=info msg="Loading containers: start." Apr 25 00:24:55.611467 kernel: Initializing XFRM netlink socket Apr 25 00:24:55.678063 systemd-networkd[1252]: docker0: Link UP Apr 25 00:24:55.696828 dockerd[1798]: time="2026-04-25T00:24:55.696775342Z" level=info msg="Loading containers: done." 
Apr 25 00:24:55.712040 dockerd[1798]: time="2026-04-25T00:24:55.711983102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 25 00:24:55.712154 dockerd[1798]: time="2026-04-25T00:24:55.712093452Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 25 00:24:55.712214 dockerd[1798]: time="2026-04-25T00:24:55.712184309Z" level=info msg="Daemon has completed initialization" Apr 25 00:24:55.743602 dockerd[1798]: time="2026-04-25T00:24:55.743529230Z" level=info msg="API listen on /run/docker.sock" Apr 25 00:24:55.744792 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 25 00:24:56.124881 containerd[1573]: time="2026-04-25T00:24:56.124839564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 25 00:24:56.589717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568408626.mount: Deactivated successfully. 
Apr 25 00:24:57.190389 containerd[1573]: time="2026-04-25T00:24:57.190317196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.191172 containerd[1573]: time="2026-04-25T00:24:57.191129156Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 25 00:24:57.192334 containerd[1573]: time="2026-04-25T00:24:57.192280771Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.194680 containerd[1573]: time="2026-04-25T00:24:57.194645426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.195305 containerd[1573]: time="2026-04-25T00:24:57.195270371Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.070393163s" Apr 25 00:24:57.195331 containerd[1573]: time="2026-04-25T00:24:57.195303876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 25 00:24:57.195886 containerd[1573]: time="2026-04-25T00:24:57.195852810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 25 00:24:57.969696 containerd[1573]: time="2026-04-25T00:24:57.969641332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.970314 containerd[1573]: time="2026-04-25T00:24:57.970276986Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 25 00:24:57.971155 containerd[1573]: time="2026-04-25T00:24:57.971118656Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.974777 containerd[1573]: time="2026-04-25T00:24:57.974748541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:57.975494 containerd[1573]: time="2026-04-25T00:24:57.975469813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 779.58798ms" Apr 25 00:24:57.975543 containerd[1573]: time="2026-04-25T00:24:57.975496992Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 25 00:24:57.975919 containerd[1573]: time="2026-04-25T00:24:57.975898583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 25 00:24:58.704235 containerd[1573]: time="2026-04-25T00:24:58.704192409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:58.704891 containerd[1573]: time="2026-04-25T00:24:58.704860390Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 25 00:24:58.705860 containerd[1573]: time="2026-04-25T00:24:58.705822959Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:58.707989 containerd[1573]: time="2026-04-25T00:24:58.707952242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:58.709686 containerd[1573]: time="2026-04-25T00:24:58.709655825Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 733.731862ms" Apr 25 00:24:58.709686 containerd[1573]: time="2026-04-25T00:24:58.709684181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 25 00:24:58.710148 containerd[1573]: time="2026-04-25T00:24:58.710121548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 25 00:24:59.477814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496982763.mount: Deactivated successfully. 
Apr 25 00:24:59.731532 containerd[1573]: time="2026-04-25T00:24:59.731408015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:59.732106 containerd[1573]: time="2026-04-25T00:24:59.732031044Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 25 00:24:59.732892 containerd[1573]: time="2026-04-25T00:24:59.732858921Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:59.734708 containerd[1573]: time="2026-04-25T00:24:59.734674045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:24:59.735063 containerd[1573]: time="2026-04-25T00:24:59.735027190Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.024884195s" Apr 25 00:24:59.735091 containerd[1573]: time="2026-04-25T00:24:59.735060657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 25 00:24:59.735550 containerd[1573]: time="2026-04-25T00:24:59.735534854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 25 00:25:00.088687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 25 00:25:00.095636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 25 00:25:00.191993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:00.195117 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 00:25:00.215102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136320824.mount: Deactivated successfully. Apr 25 00:25:00.230933 kubelet[2032]: E0425 00:25:00.230893 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 00:25:00.235089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 00:25:00.235231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 00:25:00.789479 containerd[1573]: time="2026-04-25T00:25:00.789412330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:00.790099 containerd[1573]: time="2026-04-25T00:25:00.790041895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 25 00:25:00.790999 containerd[1573]: time="2026-04-25T00:25:00.790957684Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:00.793727 containerd[1573]: time="2026-04-25T00:25:00.793683979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:00.794645 containerd[1573]: 
time="2026-04-25T00:25:00.794598877Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.058994625s" Apr 25 00:25:00.794645 containerd[1573]: time="2026-04-25T00:25:00.794644531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 25 00:25:00.795357 containerd[1573]: time="2026-04-25T00:25:00.795325513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 25 00:25:01.143918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897810046.mount: Deactivated successfully. Apr 25 00:25:01.149073 containerd[1573]: time="2026-04-25T00:25:01.149021602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:01.149983 containerd[1573]: time="2026-04-25T00:25:01.149945200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 25 00:25:01.150998 containerd[1573]: time="2026-04-25T00:25:01.150937010Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:01.153004 containerd[1573]: time="2026-04-25T00:25:01.152955665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:01.153549 containerd[1573]: time="2026-04-25T00:25:01.153505600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 358.14813ms" Apr 25 00:25:01.153549 containerd[1573]: time="2026-04-25T00:25:01.153540714Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 25 00:25:01.154083 containerd[1573]: time="2026-04-25T00:25:01.154060084Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 25 00:25:01.555691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346844742.mount: Deactivated successfully. Apr 25 00:25:02.130561 containerd[1573]: time="2026-04-25T00:25:02.130507492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:02.131140 containerd[1573]: time="2026-04-25T00:25:02.131099087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 25 00:25:02.132365 containerd[1573]: time="2026-04-25T00:25:02.132329903Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:02.134660 containerd[1573]: time="2026-04-25T00:25:02.134610901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:02.135453 containerd[1573]: time="2026-04-25T00:25:02.135400181Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 981.311692ms" Apr 25 00:25:02.135490 containerd[1573]: time="2026-04-25T00:25:02.135456813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 25 00:25:04.261669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:04.271686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:25:04.289266 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-7.scope)... Apr 25 00:25:04.289290 systemd[1]: Reloading... Apr 25 00:25:04.341474 zram_generator::config[2234]: No configuration found. Apr 25 00:25:04.412621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 00:25:04.455629 systemd[1]: Reloading finished in 166 ms. Apr 25 00:25:04.492334 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 25 00:25:04.492381 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 25 00:25:04.492601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:04.494391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:25:04.582407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:04.585470 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 25 00:25:04.615223 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 00:25:04.615223 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 25 00:25:04.615223 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 00:25:04.615545 kubelet[2292]: I0425 00:25:04.615273 2292 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 25 00:25:04.971638 kubelet[2292]: I0425 00:25:04.971537 2292 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 25 00:25:04.971638 kubelet[2292]: I0425 00:25:04.971568 2292 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 25 00:25:04.971775 kubelet[2292]: I0425 00:25:04.971760 2292 server.go:956] "Client rotation is on, will bootstrap in background" Apr 25 00:25:04.992616 kubelet[2292]: E0425 00:25:04.992580 2292 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 25 00:25:04.994873 kubelet[2292]: I0425 00:25:04.994840 2292 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 25 00:25:04.997617 kubelet[2292]: E0425 00:25:04.997588 2292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 25 
00:25:04.997617 kubelet[2292]: I0425 00:25:04.997618 2292 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 25 00:25:05.000602 kubelet[2292]: I0425 00:25:05.000589 2292 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 25 00:25:05.001299 kubelet[2292]: I0425 00:25:05.001261 2292 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 25 00:25:05.001453 kubelet[2292]: I0425 00:25:05.001294 2292 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcil
ePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 25 00:25:05.001453 kubelet[2292]: I0425 00:25:05.001426 2292 topology_manager.go:138] "Creating topology manager with none policy" Apr 25 00:25:05.001550 kubelet[2292]: I0425 00:25:05.001457 2292 container_manager_linux.go:303] "Creating device plugin manager" Apr 25 00:25:05.001550 kubelet[2292]: I0425 00:25:05.001549 2292 state_mem.go:36] "Initialized new in-memory state store" Apr 25 00:25:05.004865 kubelet[2292]: I0425 00:25:05.004839 2292 kubelet.go:480] "Attempting to sync node with API server" Apr 25 00:25:05.004888 kubelet[2292]: I0425 00:25:05.004875 2292 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 25 00:25:05.004910 kubelet[2292]: I0425 00:25:05.004899 2292 kubelet.go:386] "Adding apiserver pod source" Apr 25 00:25:05.004932 kubelet[2292]: I0425 00:25:05.004915 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 25 00:25:05.011086 kubelet[2292]: E0425 00:25:05.011048 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 25 00:25:05.011177 kubelet[2292]: I0425 00:25:05.011159 2292 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 25 00:25:05.011480 kubelet[2292]: E0425 00:25:05.011323 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 25 00:25:05.011787 kubelet[2292]: I0425 00:25:05.011751 2292 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 25 00:25:05.012321 kubelet[2292]: W0425 00:25:05.012295 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 25 00:25:05.015603 kubelet[2292]: I0425 00:25:05.015574 2292 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 25 00:25:05.015666 kubelet[2292]: I0425 00:25:05.015618 2292 server.go:1289] "Started kubelet" Apr 25 00:25:05.015820 kubelet[2292]: I0425 00:25:05.015731 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 25 00:25:05.016064 kubelet[2292]: I0425 00:25:05.016043 2292 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 25 00:25:05.016108 kubelet[2292]: I0425 00:25:05.016091 2292 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 25 00:25:05.016460 kubelet[2292]: I0425 00:25:05.016394 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 25 00:25:05.016727 kubelet[2292]: I0425 00:25:05.016704 2292 server.go:317] "Adding debug handlers to kubelet server" Apr 25 00:25:05.017360 kubelet[2292]: E0425 00:25:05.017302 2292 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 25 00:25:05.017593 kubelet[2292]: I0425 00:25:05.017570 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 25 00:25:05.019323 kubelet[2292]: E0425 00:25:05.019275 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 25 00:25:05.019367 kubelet[2292]: I0425 00:25:05.019359 2292 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 25 00:25:05.019520 kubelet[2292]: I0425 00:25:05.019501 2292 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 25 00:25:05.019568 kubelet[2292]: I0425 00:25:05.019547 2292 reconciler.go:26] "Reconciler: start to sync state" Apr 25 00:25:05.019818 kubelet[2292]: E0425 00:25:05.019789 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 25 00:25:05.020000 kubelet[2292]: E0425 00:25:05.019976 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="200ms" Apr 25 00:25:05.020103 kubelet[2292]: I0425 00:25:05.020088 2292 factory.go:223] Registration of the systemd container factory successfully Apr 25 00:25:05.020170 kubelet[2292]: I0425 00:25:05.020155 2292 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 25 00:25:05.020791 
kubelet[2292]: I0425 00:25:05.020775 2292 factory.go:223] Registration of the containerd container factory successfully Apr 25 00:25:05.022508 kubelet[2292]: E0425 00:25:05.021164 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.3:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a971d90d13763a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-25 00:25:05.015592506 +0000 UTC m=+0.426966721,LastTimestamp:2026-04-25 00:25:05.015592506 +0000 UTC m=+0.426966721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 25 00:25:05.032373 kubelet[2292]: I0425 00:25:05.032327 2292 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 25 00:25:05.033277 kubelet[2292]: I0425 00:25:05.033242 2292 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 25 00:25:05.033277 kubelet[2292]: I0425 00:25:05.033277 2292 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 25 00:25:05.033330 kubelet[2292]: I0425 00:25:05.033292 2292 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 25 00:25:05.033330 kubelet[2292]: I0425 00:25:05.033297 2292 kubelet.go:2436] "Starting kubelet main sync loop" Apr 25 00:25:05.033330 kubelet[2292]: E0425 00:25:05.033322 2292 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 25 00:25:05.033726 kubelet[2292]: E0425 00:25:05.033636 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 25 00:25:05.034149 kubelet[2292]: I0425 00:25:05.034134 2292 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 25 00:25:05.034173 kubelet[2292]: I0425 00:25:05.034150 2292 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 25 00:25:05.034173 kubelet[2292]: I0425 00:25:05.034161 2292 state_mem.go:36] "Initialized new in-memory state store" Apr 25 00:25:05.080553 kubelet[2292]: I0425 00:25:05.080501 2292 policy_none.go:49] "None policy: Start" Apr 25 00:25:05.080553 kubelet[2292]: I0425 00:25:05.080536 2292 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 25 00:25:05.080553 kubelet[2292]: I0425 00:25:05.080549 2292 state_mem.go:35] "Initializing new in-memory state store" Apr 25 00:25:05.086624 kubelet[2292]: E0425 00:25:05.085146 2292 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 25 00:25:05.086624 kubelet[2292]: I0425 00:25:05.085374 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 25 00:25:05.086624 kubelet[2292]: I0425 00:25:05.085398 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 25 00:25:05.086738 kubelet[2292]: I0425 
00:25:05.086703 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 25 00:25:05.089187 kubelet[2292]: E0425 00:25:05.088075 2292 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 25 00:25:05.089187 kubelet[2292]: E0425 00:25:05.088107 2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 25 00:25:05.139984 kubelet[2292]: E0425 00:25:05.139932 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:25:05.142370 kubelet[2292]: E0425 00:25:05.142327 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:25:05.144866 kubelet[2292]: E0425 00:25:05.144843 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 25 00:25:05.186686 kubelet[2292]: I0425 00:25:05.186611 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 25 00:25:05.186973 kubelet[2292]: E0425 00:25:05.186947 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Apr 25 00:25:05.220636 kubelet[2292]: I0425 00:25:05.220601 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:05.220636 kubelet[2292]: I0425 
00:25:05.220638 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:05.220636 kubelet[2292]: I0425 00:25:05.220676 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:05.220839 kubelet[2292]: I0425 00:25:05.220694 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:05.220839 kubelet[2292]: I0425 00:25:05.220768 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:05.220839 kubelet[2292]: I0425 00:25:05.220800 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 25 
00:25:05.220839 kubelet[2292]: E0425 00:25:05.220801 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="400ms" Apr 25 00:25:05.220839 kubelet[2292]: I0425 00:25:05.220815 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:05.220944 kubelet[2292]: I0425 00:25:05.220853 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:05.220944 kubelet[2292]: I0425 00:25:05.220867 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:05.388354 kubelet[2292]: I0425 00:25:05.388251 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 25 00:25:05.388645 kubelet[2292]: E0425 00:25:05.388610 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" node="localhost" Apr 25 00:25:05.441104 kubelet[2292]: E0425 00:25:05.441012 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:05.442002 containerd[1573]: time="2026-04-25T00:25:05.441949973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65b533799b58a76486de90e1afe7a578,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:05.443502 kubelet[2292]: E0425 00:25:05.443014 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:05.443549 containerd[1573]: time="2026-04-25T00:25:05.443320035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:05.445230 kubelet[2292]: E0425 00:25:05.445202 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:05.445572 containerd[1573]: time="2026-04-25T00:25:05.445516720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:05.621892 kubelet[2292]: E0425 00:25:05.621840 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.3:6443: connect: connection refused" interval="800ms" Apr 25 00:25:05.790738 kubelet[2292]: I0425 00:25:05.790608 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 25 00:25:05.790963 kubelet[2292]: E0425 00:25:05.790933 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.3:6443/api/v1/nodes\": dial tcp 10.0.0.3:6443: connect: connection refused" 
node="localhost" Apr 25 00:25:05.851558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506923046.mount: Deactivated successfully. Apr 25 00:25:05.859178 containerd[1573]: time="2026-04-25T00:25:05.859127311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:25:05.860755 containerd[1573]: time="2026-04-25T00:25:05.860709680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:25:05.861678 containerd[1573]: time="2026-04-25T00:25:05.861636418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:25:05.862488 containerd[1573]: time="2026-04-25T00:25:05.862451437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:25:05.863419 containerd[1573]: time="2026-04-25T00:25:05.863388580Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:25:05.864259 containerd[1573]: time="2026-04-25T00:25:05.864232703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 00:25:05.865300 containerd[1573]: time="2026-04-25T00:25:05.865267271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 25 00:25:05.866764 containerd[1573]: time="2026-04-25T00:25:05.866723539Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 00:25:05.867724 containerd[1573]: time="2026-04-25T00:25:05.867699745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 424.33668ms" Apr 25 00:25:05.868179 containerd[1573]: time="2026-04-25T00:25:05.868157503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 426.117061ms" Apr 25 00:25:05.870848 containerd[1573]: time="2026-04-25T00:25:05.870801387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.238113ms" Apr 25 00:25:05.948360 containerd[1573]: time="2026-04-25T00:25:05.948261450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:05.949205 containerd[1573]: time="2026-04-25T00:25:05.948839613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:05.949205 containerd[1573]: time="2026-04-25T00:25:05.948926138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.949205 containerd[1573]: time="2026-04-25T00:25:05.949135406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.950280 containerd[1573]: time="2026-04-25T00:25:05.950210890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:05.950280 containerd[1573]: time="2026-04-25T00:25:05.950254699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:05.950340 containerd[1573]: time="2026-04-25T00:25:05.950267505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.950374 containerd[1573]: time="2026-04-25T00:25:05.950334004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.951573 containerd[1573]: time="2026-04-25T00:25:05.951522023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:05.951629 containerd[1573]: time="2026-04-25T00:25:05.951565926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:05.951629 containerd[1573]: time="2026-04-25T00:25:05.951574179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.951688 containerd[1573]: time="2026-04-25T00:25:05.951627194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:05.998567 containerd[1573]: time="2026-04-25T00:25:05.998540313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"228778ec89b2bf8bf9288f0c69e1f17b4c09b6b6586518484b13200e7bb825c8\"" Apr 25 00:25:05.999617 kubelet[2292]: E0425 00:25:05.999597 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:06.004080 containerd[1573]: time="2026-04-25T00:25:06.003552316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65b533799b58a76486de90e1afe7a578,Namespace:kube-system,Attempt:0,} returns sandbox id \"b538012e46ca3903b958ebe9017f6782833530dfe7e2657a611f41eac94d8baf\"" Apr 25 00:25:06.004262 kubelet[2292]: E0425 00:25:06.004212 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:06.005487 containerd[1573]: time="2026-04-25T00:25:06.005462024Z" level=info msg="CreateContainer within sandbox \"228778ec89b2bf8bf9288f0c69e1f17b4c09b6b6586518484b13200e7bb825c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 25 00:25:06.006715 containerd[1573]: time="2026-04-25T00:25:06.006684854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb17f669133200ce3d162dcbb486f2aa5e01a3d4e7764003037ab634a0ede54\"" Apr 25 
00:25:06.007131 kubelet[2292]: E0425 00:25:06.007119 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:06.008981 containerd[1573]: time="2026-04-25T00:25:06.008952446Z" level=info msg="CreateContainer within sandbox \"b538012e46ca3903b958ebe9017f6782833530dfe7e2657a611f41eac94d8baf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 25 00:25:06.011763 containerd[1573]: time="2026-04-25T00:25:06.011738470Z" level=info msg="CreateContainer within sandbox \"3eb17f669133200ce3d162dcbb486f2aa5e01a3d4e7764003037ab634a0ede54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 25 00:25:06.026020 containerd[1573]: time="2026-04-25T00:25:06.025992594Z" level=info msg="CreateContainer within sandbox \"228778ec89b2bf8bf9288f0c69e1f17b4c09b6b6586518484b13200e7bb825c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"810ead3fc1d7b34cddab13691d1c4002773ef3d102248ee8624cd6921eb5f892\"" Apr 25 00:25:06.026584 containerd[1573]: time="2026-04-25T00:25:06.026561694Z" level=info msg="StartContainer for \"810ead3fc1d7b34cddab13691d1c4002773ef3d102248ee8624cd6921eb5f892\"" Apr 25 00:25:06.030687 containerd[1573]: time="2026-04-25T00:25:06.030638934Z" level=info msg="CreateContainer within sandbox \"b538012e46ca3903b958ebe9017f6782833530dfe7e2657a611f41eac94d8baf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d9b6bc68f0bda8988d2e76ca7590907268712c41d92e8a3fc80c5334b6816ad\"" Apr 25 00:25:06.031022 containerd[1573]: time="2026-04-25T00:25:06.031001584Z" level=info msg="StartContainer for \"1d9b6bc68f0bda8988d2e76ca7590907268712c41d92e8a3fc80c5334b6816ad\"" Apr 25 00:25:06.034758 containerd[1573]: time="2026-04-25T00:25:06.034738270Z" level=info msg="CreateContainer within sandbox 
\"3eb17f669133200ce3d162dcbb486f2aa5e01a3d4e7764003037ab634a0ede54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6eb7c36f59e9ccecdee0c34455020aac1eadf163e34db174379e5651f3da2a17\"" Apr 25 00:25:06.035125 containerd[1573]: time="2026-04-25T00:25:06.035103788Z" level=info msg="StartContainer for \"6eb7c36f59e9ccecdee0c34455020aac1eadf163e34db174379e5651f3da2a17\"" Apr 25 00:25:06.091612 containerd[1573]: time="2026-04-25T00:25:06.091545708Z" level=info msg="StartContainer for \"1d9b6bc68f0bda8988d2e76ca7590907268712c41d92e8a3fc80c5334b6816ad\" returns successfully" Apr 25 00:25:06.093712 containerd[1573]: time="2026-04-25T00:25:06.092611476Z" level=info msg="StartContainer for \"6eb7c36f59e9ccecdee0c34455020aac1eadf163e34db174379e5651f3da2a17\" returns successfully" Apr 25 00:25:06.093712 containerd[1573]: time="2026-04-25T00:25:06.092971298Z" level=info msg="StartContainer for \"810ead3fc1d7b34cddab13691d1c4002773ef3d102248ee8624cd6921eb5f892\" returns successfully" Apr 25 00:25:06.094290 kubelet[2292]: E0425 00:25:06.094209 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 25 00:25:06.593402 kubelet[2292]: I0425 00:25:06.593362 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 25 00:25:06.787681 kubelet[2292]: E0425 00:25:06.787633 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 25 00:25:06.879573 kubelet[2292]: I0425 00:25:06.877341 2292 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 25 00:25:06.879573 kubelet[2292]: E0425 00:25:06.877371 2292 kubelet_node_status.go:548] "Error updating node status, 
will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 25 00:25:06.920616 kubelet[2292]: I0425 00:25:06.920566 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:06.925090 kubelet[2292]: E0425 00:25:06.925053 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:06.925090 kubelet[2292]: I0425 00:25:06.925079 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:06.926838 kubelet[2292]: E0425 00:25:06.926702 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:06.926838 kubelet[2292]: I0425 00:25:06.926743 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:06.927814 kubelet[2292]: E0425 00:25:06.927769 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:07.005523 kubelet[2292]: I0425 00:25:07.005494 2292 apiserver.go:52] "Watching apiserver" Apr 25 00:25:07.020238 kubelet[2292]: I0425 00:25:07.020180 2292 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 25 00:25:07.048368 kubelet[2292]: I0425 00:25:07.048138 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:07.050041 kubelet[2292]: I0425 00:25:07.048963 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 
25 00:25:07.050118 kubelet[2292]: I0425 00:25:07.050089 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:07.050145 kubelet[2292]: E0425 00:25:07.050115 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:07.050391 kubelet[2292]: E0425 00:25:07.050370 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:07.050504 kubelet[2292]: E0425 00:25:07.050485 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:07.050595 kubelet[2292]: E0425 00:25:07.050578 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:07.051441 kubelet[2292]: E0425 00:25:07.051416 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:07.051566 kubelet[2292]: E0425 00:25:07.051536 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:08.052090 kubelet[2292]: I0425 00:25:08.051952 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:08.052487 kubelet[2292]: I0425 00:25:08.052157 2292 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:08.052613 kubelet[2292]: I0425 00:25:08.052588 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:08.060615 kubelet[2292]: E0425 00:25:08.060570 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:08.060615 kubelet[2292]: E0425 00:25:08.060621 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:08.061001 kubelet[2292]: E0425 00:25:08.060722 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.023991 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-7.scope)... Apr 25 00:25:09.024008 systemd[1]: Reloading... 
Apr 25 00:25:09.053867 kubelet[2292]: I0425 00:25:09.053840 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.054177 kubelet[2292]: I0425 00:25:09.053906 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:09.054177 kubelet[2292]: I0425 00:25:09.054090 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.059867 kubelet[2292]: E0425 00:25:09.059207 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.059867 kubelet[2292]: E0425 00:25:09.059313 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.059993 kubelet[2292]: E0425 00:25:09.059904 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:09.060017 kubelet[2292]: E0425 00:25:09.060005 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.060142 kubelet[2292]: E0425 00:25:09.060080 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.060254 kubelet[2292]: E0425 00:25:09.060231 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.074481 zram_generator::config[2616]: No configuration found. 
Apr 25 00:25:09.148820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 00:25:09.196367 systemd[1]: Reloading finished in 172 ms. Apr 25 00:25:09.224053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:25:09.248123 systemd[1]: kubelet.service: Deactivated successfully. Apr 25 00:25:09.248387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:09.253857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 00:25:09.342671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 00:25:09.346121 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 25 00:25:09.376262 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 00:25:09.376262 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 25 00:25:09.376262 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 25 00:25:09.376718 kubelet[2671]: I0425 00:25:09.376278 2671 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 25 00:25:09.382366 kubelet[2671]: I0425 00:25:09.382325 2671 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 25 00:25:09.382366 kubelet[2671]: I0425 00:25:09.382354 2671 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 25 00:25:09.382557 kubelet[2671]: I0425 00:25:09.382542 2671 server.go:956] "Client rotation is on, will bootstrap in background" Apr 25 00:25:09.383483 kubelet[2671]: I0425 00:25:09.383468 2671 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 25 00:25:09.385084 kubelet[2671]: I0425 00:25:09.385052 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 25 00:25:09.388117 kubelet[2671]: E0425 00:25:09.388055 2671 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 25 00:25:09.388117 kubelet[2671]: I0425 00:25:09.388084 2671 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 25 00:25:09.392236 kubelet[2671]: I0425 00:25:09.392185 2671 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 25 00:25:09.392771 kubelet[2671]: I0425 00:25:09.392727 2671 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 25 00:25:09.392936 kubelet[2671]: I0425 00:25:09.392755 2671 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 25 00:25:09.392936 kubelet[2671]: I0425 00:25:09.392934 2671 topology_manager.go:138] "Creating topology manager with none policy" Apr 25 00:25:09.393087 
kubelet[2671]: I0425 00:25:09.392946 2671 container_manager_linux.go:303] "Creating device plugin manager" Apr 25 00:25:09.393087 kubelet[2671]: I0425 00:25:09.392993 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 25 00:25:09.393198 kubelet[2671]: I0425 00:25:09.393180 2671 kubelet.go:480] "Attempting to sync node with API server" Apr 25 00:25:09.393198 kubelet[2671]: I0425 00:25:09.393194 2671 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 25 00:25:09.393277 kubelet[2671]: I0425 00:25:09.393221 2671 kubelet.go:386] "Adding apiserver pod source" Apr 25 00:25:09.393277 kubelet[2671]: I0425 00:25:09.393232 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 25 00:25:09.394237 kubelet[2671]: I0425 00:25:09.394144 2671 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 25 00:25:09.396849 kubelet[2671]: I0425 00:25:09.394919 2671 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 25 00:25:09.399966 kubelet[2671]: I0425 00:25:09.399947 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 25 00:25:09.400103 kubelet[2671]: I0425 00:25:09.400088 2671 server.go:1289] "Started kubelet" Apr 25 00:25:09.400880 kubelet[2671]: I0425 00:25:09.400287 2671 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 25 00:25:09.402170 kubelet[2671]: I0425 00:25:09.401630 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 25 00:25:09.402170 kubelet[2671]: I0425 00:25:09.401873 2671 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 25 00:25:09.402619 kubelet[2671]: I0425 00:25:09.402592 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 25 00:25:09.403941 
kubelet[2671]: I0425 00:25:09.403460 2671 server.go:317] "Adding debug handlers to kubelet server" Apr 25 00:25:09.405811 kubelet[2671]: I0425 00:25:09.405781 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 25 00:25:09.407502 kubelet[2671]: I0425 00:25:09.406180 2671 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 25 00:25:09.408939 kubelet[2671]: I0425 00:25:09.408915 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 25 00:25:09.409355 kubelet[2671]: I0425 00:25:09.409313 2671 reconciler.go:26] "Reconciler: start to sync state" Apr 25 00:25:09.410340 kubelet[2671]: I0425 00:25:09.410308 2671 factory.go:223] Registration of the systemd container factory successfully Apr 25 00:25:09.410522 kubelet[2671]: I0425 00:25:09.410493 2671 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 25 00:25:09.412226 kubelet[2671]: I0425 00:25:09.412175 2671 factory.go:223] Registration of the containerd container factory successfully Apr 25 00:25:09.412966 kubelet[2671]: E0425 00:25:09.412579 2671 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 25 00:25:09.426784 kubelet[2671]: I0425 00:25:09.426730 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 25 00:25:09.428156 kubelet[2671]: I0425 00:25:09.427858 2671 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 25 00:25:09.428156 kubelet[2671]: I0425 00:25:09.427875 2671 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 25 00:25:09.428156 kubelet[2671]: I0425 00:25:09.427893 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 25 00:25:09.428156 kubelet[2671]: I0425 00:25:09.427900 2671 kubelet.go:2436] "Starting kubelet main sync loop" Apr 25 00:25:09.428156 kubelet[2671]: E0425 00:25:09.427966 2671 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 25 00:25:09.448800 kubelet[2671]: I0425 00:25:09.448775 2671 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 25 00:25:09.448800 kubelet[2671]: I0425 00:25:09.448791 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 25 00:25:09.448800 kubelet[2671]: I0425 00:25:09.448804 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 25 00:25:09.448930 kubelet[2671]: I0425 00:25:09.448890 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 25 00:25:09.448930 kubelet[2671]: I0425 00:25:09.448896 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 25 00:25:09.448930 kubelet[2671]: I0425 00:25:09.448910 2671 policy_none.go:49] "None policy: Start" Apr 25 00:25:09.448930 kubelet[2671]: I0425 00:25:09.448920 2671 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 25 00:25:09.448930 kubelet[2671]: I0425 00:25:09.448927 2671 state_mem.go:35] "Initializing new in-memory state store" Apr 25 00:25:09.449000 kubelet[2671]: I0425 00:25:09.448983 2671 state_mem.go:75] "Updated machine memory state" Apr 25 00:25:09.451476 kubelet[2671]: E0425 00:25:09.450044 2671 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 25 00:25:09.451476 kubelet[2671]: I0425 
00:25:09.450162 2671 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 25 00:25:09.451476 kubelet[2671]: I0425 00:25:09.450169 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 25 00:25:09.451476 kubelet[2671]: I0425 00:25:09.450797 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 25 00:25:09.453449 kubelet[2671]: E0425 00:25:09.452890 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 25 00:25:09.529580 kubelet[2671]: I0425 00:25:09.529514 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:09.529875 kubelet[2671]: I0425 00:25:09.529780 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.529875 kubelet[2671]: I0425 00:25:09.529645 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.536173 kubelet[2671]: E0425 00:25:09.536105 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:09.536539 kubelet[2671]: E0425 00:25:09.536514 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.536539 kubelet[2671]: E0425 00:25:09.536538 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.558013 kubelet[2671]: I0425 00:25:09.557994 2671 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 25 00:25:09.563163 kubelet[2671]: I0425 00:25:09.563127 2671 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 25 00:25:09.563224 kubelet[2671]: I0425 00:25:09.563201 2671 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 25 00:25:09.611108 kubelet[2671]: I0425 00:25:09.610360 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.611108 kubelet[2671]: I0425 00:25:09.610409 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.611108 kubelet[2671]: I0425 00:25:09.610456 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.611108 kubelet[2671]: I0425 00:25:09.610516 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65b533799b58a76486de90e1afe7a578-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65b533799b58a76486de90e1afe7a578\") " pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:09.611108 kubelet[2671]: I0425 00:25:09.610539 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.611299 kubelet[2671]: I0425 00:25:09.610559 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.611299 kubelet[2671]: I0425 00:25:09.610579 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.611299 kubelet[2671]: I0425 00:25:09.610598 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:09.611299 kubelet[2671]: I0425 00:25:09.610617 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 25 00:25:09.836735 kubelet[2671]: E0425 00:25:09.836617 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.836862 kubelet[2671]: E0425 00:25:09.836744 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:09.836862 kubelet[2671]: E0425 00:25:09.836856 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:10.021014 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 25 00:25:10.021244 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 25 00:25:10.394957 kubelet[2671]: I0425 00:25:10.394809 2671 apiserver.go:52] "Watching apiserver" Apr 25 00:25:10.409229 kubelet[2671]: I0425 00:25:10.409179 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 25 00:25:10.438795 kubelet[2671]: I0425 00:25:10.438763 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:10.439084 kubelet[2671]: I0425 00:25:10.439043 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:10.439833 kubelet[2671]: E0425 00:25:10.439771 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:10.445595 kubelet[2671]: E0425 00:25:10.445561 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 25 00:25:10.445675 kubelet[2671]: E0425 00:25:10.445665 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:10.447076 kubelet[2671]: E0425 00:25:10.447046 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 25 00:25:10.447152 kubelet[2671]: E0425 00:25:10.447122 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:10.456501 kubelet[2671]: I0425 00:25:10.456465 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.456419728 podStartE2EDuration="2.456419728s" podCreationTimestamp="2026-04-25 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:10.456057815 +0000 UTC m=+1.106577874" watchObservedRunningTime="2026-04-25 00:25:10.456419728 +0000 UTC m=+1.106939787" Apr 25 00:25:10.469533 kubelet[2671]: I0425 00:25:10.469329 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.469315321 podStartE2EDuration="2.469315321s" podCreationTimestamp="2026-04-25 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:10.463116665 +0000 UTC m=+1.113636723" watchObservedRunningTime="2026-04-25 00:25:10.469315321 +0000 UTC m=+1.119835405" Apr 25 00:25:10.469533 kubelet[2671]: I0425 00:25:10.469392 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.46938892 podStartE2EDuration="2.46938892s" podCreationTimestamp="2026-04-25 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:10.468784692 +0000 UTC m=+1.119304753" watchObservedRunningTime="2026-04-25 00:25:10.46938892 +0000 UTC m=+1.119908979" Apr 25 00:25:10.471337 sudo[2710]: pam_unix(sudo:session): session closed for user root Apr 25 00:25:11.440464 kubelet[2671]: E0425 00:25:11.440015 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:11.440464 kubelet[2671]: E0425 00:25:11.440079 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:11.440464 kubelet[2671]: E0425 00:25:11.440201 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:11.630759 sudo[1780]: pam_unix(sudo:session): session closed for user root Apr 25 00:25:11.632306 sshd[1773]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:11.635000 systemd[1]: sshd@6-10.0.0.3:22-10.0.0.1:57942.service: Deactivated successfully. Apr 25 00:25:11.636595 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Apr 25 00:25:11.636661 systemd[1]: session-7.scope: Deactivated successfully. Apr 25 00:25:11.637667 systemd-logind[1554]: Removed session 7. 
Apr 25 00:25:12.443534 kubelet[2671]: E0425 00:25:12.443480 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:13.962015 kubelet[2671]: I0425 00:25:13.961881 2671 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 25 00:25:13.962384 containerd[1573]: time="2026-04-25T00:25:13.962279801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 25 00:25:13.962579 kubelet[2671]: I0425 00:25:13.962494 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 25 00:25:14.943217 kubelet[2671]: I0425 00:25:14.943177 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/185a50f1-33a4-4781-a175-86ef099d0423-kube-proxy\") pod \"kube-proxy-zl5tn\" (UID: \"185a50f1-33a4-4781-a175-86ef099d0423\") " pod="kube-system/kube-proxy-zl5tn" Apr 25 00:25:14.943217 kubelet[2671]: I0425 00:25:14.943212 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-run\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943217 kubelet[2671]: I0425 00:25:14.943226 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-hostproc\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943217 kubelet[2671]: I0425 00:25:14.943236 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-etc-cni-netd\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943246 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-lib-modules\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943257 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32605613-06bd-4153-a665-f07a955ada75-clustermesh-secrets\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943292 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/185a50f1-33a4-4781-a175-86ef099d0423-xtables-lock\") pod \"kube-proxy-zl5tn\" (UID: \"185a50f1-33a4-4781-a175-86ef099d0423\") " pod="kube-system/kube-proxy-zl5tn" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943318 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/185a50f1-33a4-4781-a175-86ef099d0423-lib-modules\") pod \"kube-proxy-zl5tn\" (UID: \"185a50f1-33a4-4781-a175-86ef099d0423\") " pod="kube-system/kube-proxy-zl5tn" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943330 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cni-path\") pod 
\"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943484 kubelet[2671]: I0425 00:25:14.943343 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-kernel\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943596 kubelet[2671]: I0425 00:25:14.943354 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-hubble-tls\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943596 kubelet[2671]: I0425 00:25:14.943364 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsstx\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-kube-api-access-dsstx\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943596 kubelet[2671]: I0425 00:25:14.943377 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-cgroup\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943596 kubelet[2671]: I0425 00:25:14.943389 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-net\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 
25 00:25:14.943596 kubelet[2671]: I0425 00:25:14.943415 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j68fm\" (UniqueName: \"kubernetes.io/projected/185a50f1-33a4-4781-a175-86ef099d0423-kube-api-access-j68fm\") pod \"kube-proxy-zl5tn\" (UID: \"185a50f1-33a4-4781-a175-86ef099d0423\") " pod="kube-system/kube-proxy-zl5tn" Apr 25 00:25:14.943678 kubelet[2671]: I0425 00:25:14.943486 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-bpf-maps\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943678 kubelet[2671]: I0425 00:25:14.943514 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-xtables-lock\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:14.943678 kubelet[2671]: I0425 00:25:14.943531 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32605613-06bd-4153-a665-f07a955ada75-cilium-config-path\") pod \"cilium-r5ps5\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " pod="kube-system/cilium-r5ps5" Apr 25 00:25:15.043967 kubelet[2671]: I0425 00:25:15.043857 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5827493-c556-401e-8cf3-0a139df33bc9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2sfmf\" (UID: \"e5827493-c556-401e-8cf3-0a139df33bc9\") " pod="kube-system/cilium-operator-6c4d7847fc-2sfmf" Apr 25 00:25:15.044280 kubelet[2671]: I0425 00:25:15.044071 2671 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgz9\" (UniqueName: \"kubernetes.io/projected/e5827493-c556-401e-8cf3-0a139df33bc9-kube-api-access-gmgz9\") pod \"cilium-operator-6c4d7847fc-2sfmf\" (UID: \"e5827493-c556-401e-8cf3-0a139df33bc9\") " pod="kube-system/cilium-operator-6c4d7847fc-2sfmf" Apr 25 00:25:15.196905 kubelet[2671]: E0425 00:25:15.196813 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.197180 containerd[1573]: time="2026-04-25T00:25:15.197138017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zl5tn,Uid:185a50f1-33a4-4781-a175-86ef099d0423,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:15.199056 kubelet[2671]: E0425 00:25:15.199028 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.199547 containerd[1573]: time="2026-04-25T00:25:15.199526944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5ps5,Uid:32605613-06bd-4153-a665-f07a955ada75,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:15.221934 containerd[1573]: time="2026-04-25T00:25:15.221693224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:15.221934 containerd[1573]: time="2026-04-25T00:25:15.221751117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:15.221934 containerd[1573]: time="2026-04-25T00:25:15.221763753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.223559 containerd[1573]: time="2026-04-25T00:25:15.221931392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.225182 containerd[1573]: time="2026-04-25T00:25:15.225132470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:15.225227 containerd[1573]: time="2026-04-25T00:25:15.225206549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:15.225272 containerd[1573]: time="2026-04-25T00:25:15.225226526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.225413 containerd[1573]: time="2026-04-25T00:25:15.225362765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.248224 containerd[1573]: time="2026-04-25T00:25:15.247935248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5ps5,Uid:32605613-06bd-4153-a665-f07a955ada75,Namespace:kube-system,Attempt:0,} returns sandbox id \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\"" Apr 25 00:25:15.248475 kubelet[2671]: E0425 00:25:15.248455 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.250902 containerd[1573]: time="2026-04-25T00:25:15.250869898Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 25 00:25:15.252539 containerd[1573]: time="2026-04-25T00:25:15.252496405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zl5tn,Uid:185a50f1-33a4-4781-a175-86ef099d0423,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c07d8074b967ea38f0d308e6754c6cdf7f40e0e1194794bcac8f24d984e08e\"" Apr 25 00:25:15.253491 kubelet[2671]: E0425 00:25:15.253466 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.258020 containerd[1573]: time="2026-04-25T00:25:15.257895694Z" level=info msg="CreateContainer within sandbox \"78c07d8074b967ea38f0d308e6754c6cdf7f40e0e1194794bcac8f24d984e08e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 25 00:25:15.271368 containerd[1573]: time="2026-04-25T00:25:15.271349885Z" level=info msg="CreateContainer within sandbox \"78c07d8074b967ea38f0d308e6754c6cdf7f40e0e1194794bcac8f24d984e08e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3beb939ebb5df5d3762044f340f4c2c054dd8071679363aa370503e2c919929\"" Apr 25 00:25:15.272148 
containerd[1573]: time="2026-04-25T00:25:15.272128231Z" level=info msg="StartContainer for \"a3beb939ebb5df5d3762044f340f4c2c054dd8071679363aa370503e2c919929\"" Apr 25 00:25:15.279858 kubelet[2671]: E0425 00:25:15.279843 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.280517 containerd[1573]: time="2026-04-25T00:25:15.280495212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2sfmf,Uid:e5827493-c556-401e-8cf3-0a139df33bc9,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:15.308600 containerd[1573]: time="2026-04-25T00:25:15.308533212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:15.308600 containerd[1573]: time="2026-04-25T00:25:15.308590467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:15.308600 containerd[1573]: time="2026-04-25T00:25:15.308601773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.308759 containerd[1573]: time="2026-04-25T00:25:15.308659791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:15.313401 containerd[1573]: time="2026-04-25T00:25:15.313352251Z" level=info msg="StartContainer for \"a3beb939ebb5df5d3762044f340f4c2c054dd8071679363aa370503e2c919929\" returns successfully" Apr 25 00:25:15.353789 containerd[1573]: time="2026-04-25T00:25:15.353716179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2sfmf,Uid:e5827493-c556-401e-8cf3-0a139df33bc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\"" Apr 25 00:25:15.354493 kubelet[2671]: E0425 00:25:15.354325 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.448078 kubelet[2671]: E0425 00:25:15.447963 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:15.456239 kubelet[2671]: I0425 00:25:15.456197 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zl5tn" podStartSLOduration=1.4561843190000001 podStartE2EDuration="1.456184319s" podCreationTimestamp="2026-04-25 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:15.455582772 +0000 UTC m=+6.106102831" watchObservedRunningTime="2026-04-25 00:25:15.456184319 +0000 UTC m=+6.106704378" Apr 25 00:25:18.108478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745904193.mount: Deactivated successfully. 
Apr 25 00:25:18.826386 kubelet[2671]: E0425 00:25:18.826050 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:19.246539 containerd[1573]: time="2026-04-25T00:25:19.246400918Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:19.247252 containerd[1573]: time="2026-04-25T00:25:19.247210697Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 25 00:25:19.248087 containerd[1573]: time="2026-04-25T00:25:19.248050080Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:19.249353 containerd[1573]: time="2026-04-25T00:25:19.249331314Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3.997873267s" Apr 25 00:25:19.249384 containerd[1573]: time="2026-04-25T00:25:19.249359672Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 25 00:25:19.252995 containerd[1573]: time="2026-04-25T00:25:19.252971075Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 25 00:25:19.256878 containerd[1573]: time="2026-04-25T00:25:19.256846599Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 25 00:25:19.266134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381420096.mount: Deactivated successfully. Apr 25 00:25:19.267865 containerd[1573]: time="2026-04-25T00:25:19.267826248Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\"" Apr 25 00:25:19.268454 containerd[1573]: time="2026-04-25T00:25:19.268117262Z" level=info msg="StartContainer for \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\"" Apr 25 00:25:19.308894 containerd[1573]: time="2026-04-25T00:25:19.308844906Z" level=info msg="StartContainer for \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\" returns successfully" Apr 25 00:25:19.391044 containerd[1573]: time="2026-04-25T00:25:19.390958526Z" level=info msg="shim disconnected" id=4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04 namespace=k8s.io Apr 25 00:25:19.391044 containerd[1573]: time="2026-04-25T00:25:19.391006755Z" level=warning msg="cleaning up after shim disconnected" id=4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04 namespace=k8s.io Apr 25 00:25:19.391044 containerd[1573]: time="2026-04-25T00:25:19.391013641Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:25:19.457137 kubelet[2671]: E0425 00:25:19.457089 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:19.457638 kubelet[2671]: E0425 00:25:19.457511 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:19.462051 containerd[1573]: time="2026-04-25T00:25:19.461927914Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 25 00:25:19.487457 containerd[1573]: time="2026-04-25T00:25:19.486917074Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\"" Apr 25 00:25:19.495452 containerd[1573]: time="2026-04-25T00:25:19.494608604Z" level=info msg="StartContainer for \"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\"" Apr 25 00:25:19.548160 containerd[1573]: time="2026-04-25T00:25:19.548071034Z" level=info msg="StartContainer for \"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\" returns successfully" Apr 25 00:25:19.556394 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 25 00:25:19.556610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 25 00:25:19.556656 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 25 00:25:19.563712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 25 00:25:19.575284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 25 00:25:19.577298 containerd[1573]: time="2026-04-25T00:25:19.577248594Z" level=info msg="shim disconnected" id=fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178 namespace=k8s.io Apr 25 00:25:19.577371 containerd[1573]: time="2026-04-25T00:25:19.577299771Z" level=warning msg="cleaning up after shim disconnected" id=fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178 namespace=k8s.io Apr 25 00:25:19.577371 containerd[1573]: time="2026-04-25T00:25:19.577308108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:25:20.264719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04-rootfs.mount: Deactivated successfully. Apr 25 00:25:20.460040 kubelet[2671]: E0425 00:25:20.460009 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:20.465013 containerd[1573]: time="2026-04-25T00:25:20.464971117Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 25 00:25:20.479818 containerd[1573]: time="2026-04-25T00:25:20.479775088Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\"" Apr 25 00:25:20.480150 containerd[1573]: time="2026-04-25T00:25:20.480110758Z" level=info msg="StartContainer for \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\"" Apr 25 00:25:20.525518 containerd[1573]: time="2026-04-25T00:25:20.523504649Z" level=info msg="StartContainer for \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\" returns successfully" Apr 25 00:25:20.544350 
containerd[1573]: time="2026-04-25T00:25:20.544287250Z" level=info msg="shim disconnected" id=2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776 namespace=k8s.io Apr 25 00:25:20.544350 containerd[1573]: time="2026-04-25T00:25:20.544347914Z" level=warning msg="cleaning up after shim disconnected" id=2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776 namespace=k8s.io Apr 25 00:25:20.544539 containerd[1573]: time="2026-04-25T00:25:20.544359147Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:25:20.614629 kubelet[2671]: E0425 00:25:20.614575 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:20.957655 containerd[1573]: time="2026-04-25T00:25:20.957612591Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:20.958569 containerd[1573]: time="2026-04-25T00:25:20.958518698Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 25 00:25:20.959677 containerd[1573]: time="2026-04-25T00:25:20.959645834Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:25:20.960676 containerd[1573]: time="2026-04-25T00:25:20.960652340Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.707658858s" Apr 25 00:25:20.960730 containerd[1573]: time="2026-04-25T00:25:20.960683602Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 25 00:25:20.964184 containerd[1573]: time="2026-04-25T00:25:20.964132065Z" level=info msg="CreateContainer within sandbox \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 25 00:25:20.974267 containerd[1573]: time="2026-04-25T00:25:20.974203052Z" level=info msg="CreateContainer within sandbox \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\"" Apr 25 00:25:20.974737 containerd[1573]: time="2026-04-25T00:25:20.974694782Z" level=info msg="StartContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\"" Apr 25 00:25:21.018145 containerd[1573]: time="2026-04-25T00:25:21.018100661Z" level=info msg="StartContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" returns successfully" Apr 25 00:25:21.267321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776-rootfs.mount: Deactivated successfully. 
Apr 25 00:25:21.463957 kubelet[2671]: E0425 00:25:21.463825 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:21.465211 kubelet[2671]: E0425 00:25:21.465159 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:21.501193 containerd[1573]: time="2026-04-25T00:25:21.501143985Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 25 00:25:21.510462 kubelet[2671]: I0425 00:25:21.510109 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2sfmf" podStartSLOduration=1.903271298 podStartE2EDuration="7.510091402s" podCreationTimestamp="2026-04-25 00:25:14 +0000 UTC" firstStartedPulling="2026-04-25 00:25:15.354700907 +0000 UTC m=+6.005220955" lastFinishedPulling="2026-04-25 00:25:20.961521011 +0000 UTC m=+11.612041059" observedRunningTime="2026-04-25 00:25:21.507029924 +0000 UTC m=+12.157549971" watchObservedRunningTime="2026-04-25 00:25:21.510091402 +0000 UTC m=+12.160611461" Apr 25 00:25:21.536519 containerd[1573]: time="2026-04-25T00:25:21.535428671Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\"" Apr 25 00:25:21.536519 containerd[1573]: time="2026-04-25T00:25:21.536046836Z" level=info msg="StartContainer for \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\"" Apr 25 00:25:21.604470 containerd[1573]: time="2026-04-25T00:25:21.602607263Z" level=info msg="StartContainer 
for \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\" returns successfully" Apr 25 00:25:21.620680 containerd[1573]: time="2026-04-25T00:25:21.620586704Z" level=info msg="shim disconnected" id=a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99 namespace=k8s.io Apr 25 00:25:21.620680 containerd[1573]: time="2026-04-25T00:25:21.620671934Z" level=warning msg="cleaning up after shim disconnected" id=a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99 namespace=k8s.io Apr 25 00:25:21.620680 containerd[1573]: time="2026-04-25T00:25:21.620679093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:25:22.002403 kubelet[2671]: E0425 00:25:22.002323 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:22.266893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99-rootfs.mount: Deactivated successfully. 
Apr 25 00:25:22.471365 kubelet[2671]: E0425 00:25:22.471086 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:22.471365 kubelet[2671]: E0425 00:25:22.471195 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:22.471365 kubelet[2671]: E0425 00:25:22.471325 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:22.476805 containerd[1573]: time="2026-04-25T00:25:22.476751063Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 25 00:25:22.494716 containerd[1573]: time="2026-04-25T00:25:22.494660665Z" level=info msg="CreateContainer within sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\"" Apr 25 00:25:22.495773 containerd[1573]: time="2026-04-25T00:25:22.495059243Z" level=info msg="StartContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\"" Apr 25 00:25:22.544305 containerd[1573]: time="2026-04-25T00:25:22.544209228Z" level=info msg="StartContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" returns successfully" Apr 25 00:25:22.696715 kubelet[2671]: I0425 00:25:22.696685 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 25 00:25:22.902056 kubelet[2671]: I0425 00:25:22.902002 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d4l86\" (UniqueName: \"kubernetes.io/projected/349ff000-6acf-47f6-abe9-2fc1bee38b4c-kube-api-access-d4l86\") pod \"coredns-674b8bbfcf-ltcfh\" (UID: \"349ff000-6acf-47f6-abe9-2fc1bee38b4c\") " pod="kube-system/coredns-674b8bbfcf-ltcfh" Apr 25 00:25:22.902056 kubelet[2671]: I0425 00:25:22.902049 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f410891-b342-4135-a9ac-32c81d349b24-config-volume\") pod \"coredns-674b8bbfcf-mwq2h\" (UID: \"5f410891-b342-4135-a9ac-32c81d349b24\") " pod="kube-system/coredns-674b8bbfcf-mwq2h" Apr 25 00:25:22.902056 kubelet[2671]: I0425 00:25:22.902075 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j99r\" (UniqueName: \"kubernetes.io/projected/5f410891-b342-4135-a9ac-32c81d349b24-kube-api-access-7j99r\") pod \"coredns-674b8bbfcf-mwq2h\" (UID: \"5f410891-b342-4135-a9ac-32c81d349b24\") " pod="kube-system/coredns-674b8bbfcf-mwq2h" Apr 25 00:25:22.902397 kubelet[2671]: I0425 00:25:22.902092 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/349ff000-6acf-47f6-abe9-2fc1bee38b4c-config-volume\") pod \"coredns-674b8bbfcf-ltcfh\" (UID: \"349ff000-6acf-47f6-abe9-2fc1bee38b4c\") " pod="kube-system/coredns-674b8bbfcf-ltcfh" Apr 25 00:25:23.025498 kubelet[2671]: E0425 00:25:23.025465 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:23.026239 containerd[1573]: time="2026-04-25T00:25:23.026193115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltcfh,Uid:349ff000-6acf-47f6-abe9-2fc1bee38b4c,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:23.027651 kubelet[2671]: E0425 
00:25:23.027587 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:23.028103 containerd[1573]: time="2026-04-25T00:25:23.028080129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwq2h,Uid:5f410891-b342-4135-a9ac-32c81d349b24,Namespace:kube-system,Attempt:0,}" Apr 25 00:25:23.484382 kubelet[2671]: E0425 00:25:23.484192 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:24.404491 systemd-networkd[1252]: cilium_host: Link UP Apr 25 00:25:24.404578 systemd-networkd[1252]: cilium_net: Link UP Apr 25 00:25:24.405348 systemd-networkd[1252]: cilium_net: Gained carrier Apr 25 00:25:24.405786 systemd-networkd[1252]: cilium_host: Gained carrier Apr 25 00:25:24.406006 systemd-networkd[1252]: cilium_net: Gained IPv6LL Apr 25 00:25:24.406147 systemd-networkd[1252]: cilium_host: Gained IPv6LL Apr 25 00:25:24.475070 systemd-networkd[1252]: cilium_vxlan: Link UP Apr 25 00:25:24.475074 systemd-networkd[1252]: cilium_vxlan: Gained carrier Apr 25 00:25:24.490821 kubelet[2671]: E0425 00:25:24.490780 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:24.641465 kernel: NET: Registered PF_ALG protocol family Apr 25 00:25:25.119659 systemd-networkd[1252]: lxc_health: Link UP Apr 25 00:25:25.128915 systemd-networkd[1252]: lxc_health: Gained carrier Apr 25 00:25:25.214516 kubelet[2671]: I0425 00:25:25.214253 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r5ps5" podStartSLOduration=7.211462191 podStartE2EDuration="11.214239702s" podCreationTimestamp="2026-04-25 00:25:14 +0000 UTC" 
firstStartedPulling="2026-04-25 00:25:15.25005564 +0000 UTC m=+5.900575704" lastFinishedPulling="2026-04-25 00:25:19.252833162 +0000 UTC m=+9.903353215" observedRunningTime="2026-04-25 00:25:23.498884489 +0000 UTC m=+14.149404548" watchObservedRunningTime="2026-04-25 00:25:25.214239702 +0000 UTC m=+15.864759760" Apr 25 00:25:25.493527 kubelet[2671]: E0425 00:25:25.493065 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:25.589009 systemd-networkd[1252]: lxc652c6866a755: Link UP Apr 25 00:25:25.598486 kernel: eth0: renamed from tmp077b5 Apr 25 00:25:25.608672 systemd-networkd[1252]: lxc652c6866a755: Gained carrier Apr 25 00:25:25.611904 systemd-networkd[1252]: lxc22cced1951d6: Link UP Apr 25 00:25:25.618453 kernel: eth0: renamed from tmp31256 Apr 25 00:25:25.625179 systemd-networkd[1252]: lxc22cced1951d6: Gained carrier Apr 25 00:25:26.257679 systemd-networkd[1252]: cilium_vxlan: Gained IPv6LL Apr 25 00:25:26.384660 systemd-networkd[1252]: lxc_health: Gained IPv6LL Apr 25 00:25:26.502810 kubelet[2671]: I0425 00:25:26.502775 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:25:26.503125 kubelet[2671]: E0425 00:25:26.503111 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:27.280803 systemd-networkd[1252]: lxc652c6866a755: Gained IPv6LL Apr 25 00:25:27.536640 systemd-networkd[1252]: lxc22cced1951d6: Gained IPv6LL Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374860819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374916743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374924794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374630136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374849199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374870320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.374934904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:28.377447 containerd[1573]: time="2026-04-25T00:25:28.375119269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:25:28.395838 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:25:28.396958 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 25 00:25:28.418334 containerd[1573]: time="2026-04-25T00:25:28.418291409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ltcfh,Uid:349ff000-6acf-47f6-abe9-2fc1bee38b4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"077b5819ab8f8e87656322e1718e7a6c7b11dbcd568c74ac2d64919dbc3fb66b\"" Apr 25 00:25:28.418912 kubelet[2671]: E0425 00:25:28.418881 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:28.424229 containerd[1573]: time="2026-04-25T00:25:28.424186756Z" level=info msg="CreateContainer within sandbox \"077b5819ab8f8e87656322e1718e7a6c7b11dbcd568c74ac2d64919dbc3fb66b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:25:28.427631 containerd[1573]: time="2026-04-25T00:25:28.427605813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mwq2h,Uid:5f410891-b342-4135-a9ac-32c81d349b24,Namespace:kube-system,Attempt:0,} returns sandbox id \"312560ae752f2a5a625cc83f91698bc06948fe6ba2d040532d32f3f1fc3ece3e\"" Apr 25 00:25:28.428127 kubelet[2671]: E0425 00:25:28.428110 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:28.431648 containerd[1573]: time="2026-04-25T00:25:28.431605524Z" level=info msg="CreateContainer within sandbox \"312560ae752f2a5a625cc83f91698bc06948fe6ba2d040532d32f3f1fc3ece3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 
00:25:28.441947 containerd[1573]: time="2026-04-25T00:25:28.441905476Z" level=info msg="CreateContainer within sandbox \"077b5819ab8f8e87656322e1718e7a6c7b11dbcd568c74ac2d64919dbc3fb66b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83345695781c95c25a65448d79a79c669d58d213c0f637c8c9f2208549d89194\"" Apr 25 00:25:28.442298 containerd[1573]: time="2026-04-25T00:25:28.442265713Z" level=info msg="StartContainer for \"83345695781c95c25a65448d79a79c669d58d213c0f637c8c9f2208549d89194\"" Apr 25 00:25:28.449456 containerd[1573]: time="2026-04-25T00:25:28.449393777Z" level=info msg="CreateContainer within sandbox \"312560ae752f2a5a625cc83f91698bc06948fe6ba2d040532d32f3f1fc3ece3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee140919786fc54b4d5bbc510ed8811e5fef8174b455ee7ce7eebb44b991afc8\"" Apr 25 00:25:28.450400 containerd[1573]: time="2026-04-25T00:25:28.449858322Z" level=info msg="StartContainer for \"ee140919786fc54b4d5bbc510ed8811e5fef8174b455ee7ce7eebb44b991afc8\"" Apr 25 00:25:28.484544 containerd[1573]: time="2026-04-25T00:25:28.484512819Z" level=info msg="StartContainer for \"83345695781c95c25a65448d79a79c669d58d213c0f637c8c9f2208549d89194\" returns successfully" Apr 25 00:25:28.484649 containerd[1573]: time="2026-04-25T00:25:28.484568950Z" level=info msg="StartContainer for \"ee140919786fc54b4d5bbc510ed8811e5fef8174b455ee7ce7eebb44b991afc8\" returns successfully" Apr 25 00:25:28.508805 kubelet[2671]: E0425 00:25:28.508283 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:28.513947 kubelet[2671]: E0425 00:25:28.513892 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:28.521712 kubelet[2671]: I0425 00:25:28.521678 2671 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mwq2h" podStartSLOduration=14.521666756 podStartE2EDuration="14.521666756s" podCreationTimestamp="2026-04-25 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:28.519466015 +0000 UTC m=+19.169986066" watchObservedRunningTime="2026-04-25 00:25:28.521666756 +0000 UTC m=+19.172186815" Apr 25 00:25:29.379285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601928119.mount: Deactivated successfully. Apr 25 00:25:29.515281 kubelet[2671]: E0425 00:25:29.515251 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:29.516631 kubelet[2671]: E0425 00:25:29.516394 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:29.532318 kubelet[2671]: I0425 00:25:29.532185 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ltcfh" podStartSLOduration=15.532126698999999 podStartE2EDuration="15.532126699s" podCreationTimestamp="2026-04-25 00:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:25:28.533563116 +0000 UTC m=+19.184083175" watchObservedRunningTime="2026-04-25 00:25:29.532126699 +0000 UTC m=+20.182646765" Apr 25 00:25:30.517212 kubelet[2671]: E0425 00:25:30.517159 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:30.517595 kubelet[2671]: E0425 00:25:30.517228 2671 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:31.018828 kubelet[2671]: I0425 00:25:31.018765 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 25 00:25:31.019236 kubelet[2671]: E0425 00:25:31.019182 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:31.518804 kubelet[2671]: E0425 00:25:31.518774 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 25 00:25:33.141740 systemd[1]: Started sshd@7-10.0.0.3:22-10.0.0.1:56138.service - OpenSSH per-connection server daemon (10.0.0.1:56138). Apr 25 00:25:33.173944 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 56138 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:33.175127 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:33.178883 systemd-logind[1554]: New session 8 of user core. Apr 25 00:25:33.184646 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 25 00:25:33.412984 sshd[4064]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:33.415777 systemd[1]: sshd@7-10.0.0.3:22-10.0.0.1:56138.service: Deactivated successfully. Apr 25 00:25:33.417220 systemd[1]: session-8.scope: Deactivated successfully. Apr 25 00:25:33.417239 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. Apr 25 00:25:33.418038 systemd-logind[1554]: Removed session 8. Apr 25 00:25:33.631346 update_engine[1557]: I20260425 00:25:33.631214 1557 update_attempter.cc:509] Updating boot flags... 
Apr 25 00:25:33.648569 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (4088) Apr 25 00:25:33.663706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (4090) Apr 25 00:25:38.424847 systemd[1]: Started sshd@8-10.0.0.3:22-10.0.0.1:56150.service - OpenSSH per-connection server daemon (10.0.0.1:56150). Apr 25 00:25:38.456830 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 56150 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:38.458003 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:38.461688 systemd-logind[1554]: New session 9 of user core. Apr 25 00:25:38.473936 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 25 00:25:38.581406 sshd[4095]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:38.584345 systemd[1]: sshd@8-10.0.0.3:22-10.0.0.1:56150.service: Deactivated successfully. Apr 25 00:25:38.586137 systemd[1]: session-9.scope: Deactivated successfully. Apr 25 00:25:38.586800 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Apr 25 00:25:38.587738 systemd-logind[1554]: Removed session 9. Apr 25 00:25:43.591671 systemd[1]: Started sshd@9-10.0.0.3:22-10.0.0.1:39954.service - OpenSSH per-connection server daemon (10.0.0.1:39954). Apr 25 00:25:43.621364 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 39954 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:43.622779 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:43.626497 systemd-logind[1554]: New session 10 of user core. Apr 25 00:25:43.632708 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 25 00:25:43.731521 sshd[4112]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:43.736686 systemd[1]: Started sshd@10-10.0.0.3:22-10.0.0.1:39968.service - OpenSSH per-connection server daemon (10.0.0.1:39968). Apr 25 00:25:43.737020 systemd[1]: sshd@9-10.0.0.3:22-10.0.0.1:39954.service: Deactivated successfully. Apr 25 00:25:43.739225 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. Apr 25 00:25:43.739271 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 00:25:43.741207 systemd-logind[1554]: Removed session 10. Apr 25 00:25:43.768258 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 39968 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:43.769555 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:43.773063 systemd-logind[1554]: New session 11 of user core. Apr 25 00:25:43.782672 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 25 00:25:43.936276 sshd[4126]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:43.947742 systemd[1]: Started sshd@11-10.0.0.3:22-10.0.0.1:39970.service - OpenSSH per-connection server daemon (10.0.0.1:39970). Apr 25 00:25:43.948134 systemd[1]: sshd@10-10.0.0.3:22-10.0.0.1:39968.service: Deactivated successfully. Apr 25 00:25:43.950344 systemd[1]: session-11.scope: Deactivated successfully. Apr 25 00:25:43.955113 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. Apr 25 00:25:43.957553 systemd-logind[1554]: Removed session 11. Apr 25 00:25:43.983388 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 39970 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:43.984560 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:43.988130 systemd-logind[1554]: New session 12 of user core. 
Apr 25 00:25:43.997730 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 25 00:25:44.094915 sshd[4139]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:44.097358 systemd[1]: sshd@11-10.0.0.3:22-10.0.0.1:39970.service: Deactivated successfully. Apr 25 00:25:44.098699 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. Apr 25 00:25:44.098737 systemd[1]: session-12.scope: Deactivated successfully. Apr 25 00:25:44.099421 systemd-logind[1554]: Removed session 12. Apr 25 00:25:49.109763 systemd[1]: Started sshd@12-10.0.0.3:22-10.0.0.1:39978.service - OpenSSH per-connection server daemon (10.0.0.1:39978). Apr 25 00:25:49.137840 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 39978 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:49.139044 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:49.142299 systemd-logind[1554]: New session 13 of user core. Apr 25 00:25:49.153664 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 25 00:25:49.248215 sshd[4162]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:49.250763 systemd[1]: sshd@12-10.0.0.3:22-10.0.0.1:39978.service: Deactivated successfully. Apr 25 00:25:49.252235 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. Apr 25 00:25:49.252276 systemd[1]: session-13.scope: Deactivated successfully. Apr 25 00:25:49.253185 systemd-logind[1554]: Removed session 13. Apr 25 00:25:54.258649 systemd[1]: Started sshd@13-10.0.0.3:22-10.0.0.1:60110.service - OpenSSH per-connection server daemon (10.0.0.1:60110). 
Apr 25 00:25:54.287382 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 60110 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:54.288397 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:54.291781 systemd-logind[1554]: New session 14 of user core. Apr 25 00:25:54.302637 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 25 00:25:54.395574 sshd[4178]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:54.401654 systemd[1]: Started sshd@14-10.0.0.3:22-10.0.0.1:60112.service - OpenSSH per-connection server daemon (10.0.0.1:60112). Apr 25 00:25:54.402016 systemd[1]: sshd@13-10.0.0.3:22-10.0.0.1:60110.service: Deactivated successfully. Apr 25 00:25:54.403207 systemd[1]: session-14.scope: Deactivated successfully. Apr 25 00:25:54.404297 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. Apr 25 00:25:54.405183 systemd-logind[1554]: Removed session 14. Apr 25 00:25:54.433339 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 60112 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:54.434302 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:54.437650 systemd-logind[1554]: New session 15 of user core. Apr 25 00:25:54.445658 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 25 00:25:54.597148 sshd[4191]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:54.606699 systemd[1]: Started sshd@15-10.0.0.3:22-10.0.0.1:60122.service - OpenSSH per-connection server daemon (10.0.0.1:60122). Apr 25 00:25:54.607045 systemd[1]: sshd@14-10.0.0.3:22-10.0.0.1:60112.service: Deactivated successfully. Apr 25 00:25:54.609172 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. Apr 25 00:25:54.609237 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 25 00:25:54.610528 systemd-logind[1554]: Removed session 15. Apr 25 00:25:54.634960 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 60122 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:54.636263 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:54.639498 systemd-logind[1554]: New session 16 of user core. Apr 25 00:25:54.648627 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 25 00:25:55.181778 sshd[4204]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:55.183788 systemd[1]: sshd@15-10.0.0.3:22-10.0.0.1:60122.service: Deactivated successfully. Apr 25 00:25:55.186253 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. Apr 25 00:25:55.188824 systemd[1]: session-16.scope: Deactivated successfully. Apr 25 00:25:55.195317 systemd[1]: Started sshd@16-10.0.0.3:22-10.0.0.1:60134.service - OpenSSH per-connection server daemon (10.0.0.1:60134). Apr 25 00:25:55.196025 systemd-logind[1554]: Removed session 16. Apr 25 00:25:55.231324 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 60134 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:55.232485 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:55.235939 systemd-logind[1554]: New session 17 of user core. Apr 25 00:25:55.253737 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 25 00:25:55.482818 sshd[4228]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:55.488652 systemd[1]: Started sshd@17-10.0.0.3:22-10.0.0.1:60148.service - OpenSSH per-connection server daemon (10.0.0.1:60148). Apr 25 00:25:55.489879 systemd[1]: sshd@16-10.0.0.3:22-10.0.0.1:60134.service: Deactivated successfully. Apr 25 00:25:55.492969 systemd[1]: session-17.scope: Deactivated successfully. Apr 25 00:25:55.493172 systemd-logind[1554]: Session 17 logged out. 
Waiting for processes to exit. Apr 25 00:25:55.494763 systemd-logind[1554]: Removed session 17. Apr 25 00:25:55.520229 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 60148 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:25:55.521339 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:25:55.524881 systemd-logind[1554]: New session 18 of user core. Apr 25 00:25:55.533644 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 25 00:25:55.632427 sshd[4240]: pam_unix(sshd:session): session closed for user core Apr 25 00:25:55.635082 systemd[1]: sshd@17-10.0.0.3:22-10.0.0.1:60148.service: Deactivated successfully. Apr 25 00:25:55.636537 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. Apr 25 00:25:55.636558 systemd[1]: session-18.scope: Deactivated successfully. Apr 25 00:25:55.637341 systemd-logind[1554]: Removed session 18. Apr 25 00:26:00.645691 systemd[1]: Started sshd@18-10.0.0.3:22-10.0.0.1:42422.service - OpenSSH per-connection server daemon (10.0.0.1:42422). Apr 25 00:26:00.674205 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:00.675513 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:00.678782 systemd-logind[1554]: New session 19 of user core. Apr 25 00:26:00.684648 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 25 00:26:00.780688 sshd[4260]: pam_unix(sshd:session): session closed for user core Apr 25 00:26:00.783160 systemd[1]: sshd@18-10.0.0.3:22-10.0.0.1:42422.service: Deactivated successfully. Apr 25 00:26:00.784560 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. Apr 25 00:26:00.784610 systemd[1]: session-19.scope: Deactivated successfully. Apr 25 00:26:00.785575 systemd-logind[1554]: Removed session 19. 
Apr 25 00:26:05.792706 systemd[1]: Started sshd@19-10.0.0.3:22-10.0.0.1:42428.service - OpenSSH per-connection server daemon (10.0.0.1:42428). Apr 25 00:26:05.821674 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 42428 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:05.822808 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:05.826227 systemd-logind[1554]: New session 20 of user core. Apr 25 00:26:05.833648 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 25 00:26:05.932070 sshd[4275]: pam_unix(sshd:session): session closed for user core Apr 25 00:26:05.934720 systemd[1]: sshd@19-10.0.0.3:22-10.0.0.1:42428.service: Deactivated successfully. Apr 25 00:26:05.936298 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. Apr 25 00:26:05.936316 systemd[1]: session-20.scope: Deactivated successfully. Apr 25 00:26:05.937124 systemd-logind[1554]: Removed session 20. Apr 25 00:26:10.951697 systemd[1]: Started sshd@20-10.0.0.3:22-10.0.0.1:34044.service - OpenSSH per-connection server daemon (10.0.0.1:34044). Apr 25 00:26:10.982284 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 34044 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:10.983487 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:10.986973 systemd-logind[1554]: New session 21 of user core. Apr 25 00:26:10.995652 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 25 00:26:11.089013 sshd[4292]: pam_unix(sshd:session): session closed for user core Apr 25 00:26:11.101772 systemd[1]: Started sshd@21-10.0.0.3:22-10.0.0.1:34060.service - OpenSSH per-connection server daemon (10.0.0.1:34060). Apr 25 00:26:11.102493 systemd[1]: sshd@20-10.0.0.3:22-10.0.0.1:34044.service: Deactivated successfully. Apr 25 00:26:11.104311 systemd[1]: session-21.scope: Deactivated successfully. 
Apr 25 00:26:11.105592 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. Apr 25 00:26:11.106577 systemd-logind[1554]: Removed session 21. Apr 25 00:26:11.132569 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 34060 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:11.133730 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:11.137416 systemd-logind[1554]: New session 22 of user core. Apr 25 00:26:11.143662 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 25 00:26:12.467044 containerd[1573]: time="2026-04-25T00:26:12.466978627Z" level=info msg="StopContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" with timeout 30 (s)" Apr 25 00:26:12.468160 containerd[1573]: time="2026-04-25T00:26:12.468133777Z" level=info msg="Stop container \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" with signal terminated" Apr 25 00:26:12.499885 containerd[1573]: time="2026-04-25T00:26:12.499844010Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 00:26:12.507087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413-rootfs.mount: Deactivated successfully. 
Apr 25 00:26:12.507801 containerd[1573]: time="2026-04-25T00:26:12.507102733Z" level=info msg="StopContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" with timeout 2 (s)" Apr 25 00:26:12.508300 containerd[1573]: time="2026-04-25T00:26:12.508221697Z" level=info msg="Stop container \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" with signal terminated" Apr 25 00:26:12.512918 containerd[1573]: time="2026-04-25T00:26:12.512834753Z" level=info msg="shim disconnected" id=8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413 namespace=k8s.io Apr 25 00:26:12.512918 containerd[1573]: time="2026-04-25T00:26:12.512878682Z" level=warning msg="cleaning up after shim disconnected" id=8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413 namespace=k8s.io Apr 25 00:26:12.512918 containerd[1573]: time="2026-04-25T00:26:12.512886279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:26:12.515281 systemd-networkd[1252]: lxc_health: Link DOWN Apr 25 00:26:12.515296 systemd-networkd[1252]: lxc_health: Lost carrier Apr 25 00:26:12.528905 containerd[1573]: time="2026-04-25T00:26:12.528871574Z" level=info msg="StopContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" returns successfully" Apr 25 00:26:12.531925 containerd[1573]: time="2026-04-25T00:26:12.531876302Z" level=info msg="StopPodSandbox for \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\"" Apr 25 00:26:12.531925 containerd[1573]: time="2026-04-25T00:26:12.531922258Z" level=info msg="Container to stop \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.533813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994-shm.mount: Deactivated successfully. 
Apr 25 00:26:12.552528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed-rootfs.mount: Deactivated successfully. Apr 25 00:26:12.557895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994-rootfs.mount: Deactivated successfully. Apr 25 00:26:12.561584 containerd[1573]: time="2026-04-25T00:26:12.561517735Z" level=info msg="shim disconnected" id=369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed namespace=k8s.io Apr 25 00:26:12.561584 containerd[1573]: time="2026-04-25T00:26:12.561557316Z" level=warning msg="cleaning up after shim disconnected" id=369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed namespace=k8s.io Apr 25 00:26:12.561584 containerd[1573]: time="2026-04-25T00:26:12.561564525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:26:12.561760 containerd[1573]: time="2026-04-25T00:26:12.561522057Z" level=info msg="shim disconnected" id=b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994 namespace=k8s.io Apr 25 00:26:12.561760 containerd[1573]: time="2026-04-25T00:26:12.561682238Z" level=warning msg="cleaning up after shim disconnected" id=b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994 namespace=k8s.io Apr 25 00:26:12.561760 containerd[1573]: time="2026-04-25T00:26:12.561688331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:26:12.572086 containerd[1573]: time="2026-04-25T00:26:12.572045734Z" level=warning msg="cleanup warnings time=\"2026-04-25T00:26:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 25 00:26:12.573014 containerd[1573]: time="2026-04-25T00:26:12.572976663Z" level=info msg="TearDown network for sandbox \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\" 
successfully" Apr 25 00:26:12.573014 containerd[1573]: time="2026-04-25T00:26:12.573001987Z" level=info msg="StopPodSandbox for \"b5f34d7924387d6d8400fe190c6735157e872a59cb353190e63e2db96b46a994\" returns successfully" Apr 25 00:26:12.576216 containerd[1573]: time="2026-04-25T00:26:12.576189401Z" level=info msg="StopContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" returns successfully" Apr 25 00:26:12.576612 containerd[1573]: time="2026-04-25T00:26:12.576589631Z" level=info msg="StopPodSandbox for \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\"" Apr 25 00:26:12.576665 containerd[1573]: time="2026-04-25T00:26:12.576622637Z" level=info msg="Container to stop \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.576665 containerd[1573]: time="2026-04-25T00:26:12.576631613Z" level=info msg="Container to stop \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.576665 containerd[1573]: time="2026-04-25T00:26:12.576638248Z" level=info msg="Container to stop \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.576665 containerd[1573]: time="2026-04-25T00:26:12.576644848Z" level=info msg="Container to stop \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.576665 containerd[1573]: time="2026-04-25T00:26:12.576651394Z" level=info msg="Container to stop \"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 25 00:26:12.596924 kubelet[2671]: I0425 00:26:12.596867 2671 scope.go:117] "RemoveContainer" 
containerID="8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413" Apr 25 00:26:12.601086 containerd[1573]: time="2026-04-25T00:26:12.600831426Z" level=info msg="shim disconnected" id=f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d namespace=k8s.io Apr 25 00:26:12.601086 containerd[1573]: time="2026-04-25T00:26:12.600886680Z" level=warning msg="cleaning up after shim disconnected" id=f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d namespace=k8s.io Apr 25 00:26:12.601086 containerd[1573]: time="2026-04-25T00:26:12.600893407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:26:12.603390 containerd[1573]: time="2026-04-25T00:26:12.603334383Z" level=info msg="RemoveContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\"" Apr 25 00:26:12.603480 kubelet[2671]: I0425 00:26:12.603342 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5827493-c556-401e-8cf3-0a139df33bc9-cilium-config-path\") pod \"e5827493-c556-401e-8cf3-0a139df33bc9\" (UID: \"e5827493-c556-401e-8cf3-0a139df33bc9\") " Apr 25 00:26:12.603480 kubelet[2671]: I0425 00:26:12.603378 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmgz9\" (UniqueName: \"kubernetes.io/projected/e5827493-c556-401e-8cf3-0a139df33bc9-kube-api-access-gmgz9\") pod \"e5827493-c556-401e-8cf3-0a139df33bc9\" (UID: \"e5827493-c556-401e-8cf3-0a139df33bc9\") " Apr 25 00:26:12.607380 kubelet[2671]: I0425 00:26:12.607351 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5827493-c556-401e-8cf3-0a139df33bc9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5827493-c556-401e-8cf3-0a139df33bc9" (UID: "e5827493-c556-401e-8cf3-0a139df33bc9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:26:12.608801 containerd[1573]: time="2026-04-25T00:26:12.608757636Z" level=info msg="RemoveContainer for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" returns successfully" Apr 25 00:26:12.611243 kubelet[2671]: I0425 00:26:12.611119 2671 scope.go:117] "RemoveContainer" containerID="8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413" Apr 25 00:26:12.611376 containerd[1573]: time="2026-04-25T00:26:12.611337868Z" level=error msg="ContainerStatus for \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\": not found" Apr 25 00:26:12.611404 kubelet[2671]: I0425 00:26:12.611368 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5827493-c556-401e-8cf3-0a139df33bc9-kube-api-access-gmgz9" (OuterVolumeSpecName: "kube-api-access-gmgz9") pod "e5827493-c556-401e-8cf3-0a139df33bc9" (UID: "e5827493-c556-401e-8cf3-0a139df33bc9"). InnerVolumeSpecName "kube-api-access-gmgz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:26:12.616807 kubelet[2671]: E0425 00:26:12.616787 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\": not found" containerID="8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413" Apr 25 00:26:12.616868 kubelet[2671]: I0425 00:26:12.616816 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413"} err="failed to get container status \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ee0ea22125de5c24eee60e29ab2492c6e2a2189e5dcc8f04b42d4f8232e8413\": not found" Apr 25 00:26:12.618194 containerd[1573]: time="2026-04-25T00:26:12.618167889Z" level=info msg="TearDown network for sandbox \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" successfully" Apr 25 00:26:12.618194 containerd[1573]: time="2026-04-25T00:26:12.618189233Z" level=info msg="StopPodSandbox for \"f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d\" returns successfully" Apr 25 00:26:12.704005 kubelet[2671]: I0425 00:26:12.703931 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-etc-cni-netd\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704005 kubelet[2671]: I0425 00:26:12.703987 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cni-path\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: 
\"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704005 kubelet[2671]: I0425 00:26:12.704012 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32605613-06bd-4153-a665-f07a955ada75-cilium-config-path\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704034 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-kernel\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704047 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-cgroup\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704059 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-xtables-lock\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704072 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-lib-modules\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704062 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704179 kubelet[2671]: I0425 00:26:12.704085 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-run\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704310 kubelet[2671]: I0425 00:26:12.704084 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704310 kubelet[2671]: I0425 00:26:12.704101 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsstx\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-kube-api-access-dsstx\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704310 kubelet[2671]: I0425 00:26:12.704114 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-hostproc\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704310 kubelet[2671]: I0425 00:26:12.704113 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704310 kubelet[2671]: I0425 00:26:12.704126 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-bpf-maps\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704414 kubelet[2671]: I0425 00:26:12.704130 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704414 kubelet[2671]: I0425 00:26:12.704141 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704414 kubelet[2671]: I0425 00:26:12.704142 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32605613-06bd-4153-a665-f07a955ada75-clustermesh-secrets\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704414 kubelet[2671]: I0425 00:26:12.704151 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704414 kubelet[2671]: I0425 00:26:12.704157 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-hubble-tls\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704162 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-hostproc" (OuterVolumeSpecName: "hostproc") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704170 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-net\") pod \"32605613-06bd-4153-a665-f07a955ada75\" (UID: \"32605613-06bd-4153-a665-f07a955ada75\") " Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704198 2671 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704207 2671 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704216 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5827493-c556-401e-8cf3-0a139df33bc9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704223 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704551 kubelet[2671]: I0425 00:26:12.704232 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704239 2671 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-xtables-lock\") 
on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704245 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmgz9\" (UniqueName: \"kubernetes.io/projected/e5827493-c556-401e-8cf3-0a139df33bc9-kube-api-access-gmgz9\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704254 2671 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704260 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704281 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704687 kubelet[2671]: I0425 00:26:12.704298 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.704796 kubelet[2671]: I0425 00:26:12.704386 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cni-path" (OuterVolumeSpecName: "cni-path") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 25 00:26:12.705998 kubelet[2671]: I0425 00:26:12.705937 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32605613-06bd-4153-a665-f07a955ada75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:26:12.706693 kubelet[2671]: I0425 00:26:12.706568 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-kube-api-access-dsstx" (OuterVolumeSpecName: "kube-api-access-dsstx") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "kube-api-access-dsstx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:26:12.706854 kubelet[2671]: I0425 00:26:12.706828 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32605613-06bd-4153-a665-f07a955ada75-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 25 00:26:12.707179 kubelet[2671]: I0425 00:26:12.707155 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32605613-06bd-4153-a665-f07a955ada75" (UID: "32605613-06bd-4153-a665-f07a955ada75"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804831 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32605613-06bd-4153-a665-f07a955ada75-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804872 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsstx\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-kube-api-access-dsstx\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804884 2671 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804896 2671 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32605613-06bd-4153-a665-f07a955ada75-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804904 2671 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32605613-06bd-4153-a665-f07a955ada75-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804912 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:12.804991 kubelet[2671]: I0425 00:26:12.804919 2671 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32605613-06bd-4153-a665-f07a955ada75-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 25 00:26:13.430298 kubelet[2671]: I0425 00:26:13.430227 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5827493-c556-401e-8cf3-0a139df33bc9" path="/var/lib/kubelet/pods/e5827493-c556-401e-8cf3-0a139df33bc9/volumes" Apr 25 00:26:13.482931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d-rootfs.mount: Deactivated successfully. Apr 25 00:26:13.483079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f567b701e0c35b121ccf789b7c94522df24ccc81bf671a5ad765e8adb7e6c31d-shm.mount: Deactivated successfully. Apr 25 00:26:13.483164 systemd[1]: var-lib-kubelet-pods-e5827493\x2dc556\x2d401e\x2d8cf3\x2d0a139df33bc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmgz9.mount: Deactivated successfully. Apr 25 00:26:13.483247 systemd[1]: var-lib-kubelet-pods-32605613\x2d06bd\x2d4153\x2da665\x2df07a955ada75-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 25 00:26:13.483324 systemd[1]: var-lib-kubelet-pods-32605613\x2d06bd\x2d4153\x2da665\x2df07a955ada75-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 25 00:26:13.483393 systemd[1]: var-lib-kubelet-pods-32605613\x2d06bd\x2d4153\x2da665\x2df07a955ada75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsstx.mount: Deactivated successfully. 
Apr 25 00:26:13.621097 kubelet[2671]: I0425 00:26:13.621070 2671 scope.go:117] "RemoveContainer" containerID="369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed" Apr 25 00:26:13.622164 containerd[1573]: time="2026-04-25T00:26:13.622142551Z" level=info msg="RemoveContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\"" Apr 25 00:26:13.626050 containerd[1573]: time="2026-04-25T00:26:13.625972790Z" level=info msg="RemoveContainer for \"369dc7ba5bb0d670b1d222bb1d57d0dd20a0238c89a60c51e0d201eb95a228ed\" returns successfully" Apr 25 00:26:13.626487 kubelet[2671]: I0425 00:26:13.626460 2671 scope.go:117] "RemoveContainer" containerID="a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99" Apr 25 00:26:13.628010 containerd[1573]: time="2026-04-25T00:26:13.627463926Z" level=info msg="RemoveContainer for \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\"" Apr 25 00:26:13.630275 containerd[1573]: time="2026-04-25T00:26:13.630235529Z" level=info msg="RemoveContainer for \"a6cc035c61afe5cd1039906b5180791bb5528965f11805ab811ab6a7a2c63e99\" returns successfully" Apr 25 00:26:13.630458 kubelet[2671]: I0425 00:26:13.630400 2671 scope.go:117] "RemoveContainer" containerID="2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776" Apr 25 00:26:13.631312 containerd[1573]: time="2026-04-25T00:26:13.631287893Z" level=info msg="RemoveContainer for \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\"" Apr 25 00:26:13.634520 containerd[1573]: time="2026-04-25T00:26:13.634498304Z" level=info msg="RemoveContainer for \"2ac5d11afeb482b06d811968dbfa5398abceb9c008252c6d2f0c2de5b080f776\" returns successfully" Apr 25 00:26:13.634720 kubelet[2671]: I0425 00:26:13.634697 2671 scope.go:117] "RemoveContainer" containerID="fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178" Apr 25 00:26:13.636127 containerd[1573]: time="2026-04-25T00:26:13.636104651Z" level=info msg="RemoveContainer for 
\"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\"" Apr 25 00:26:13.639959 containerd[1573]: time="2026-04-25T00:26:13.639919152Z" level=info msg="RemoveContainer for \"fd013353837040bb7b15d4d5e2371d651239c241ad710327b20ac23e167fe178\" returns successfully" Apr 25 00:26:13.640113 kubelet[2671]: I0425 00:26:13.640094 2671 scope.go:117] "RemoveContainer" containerID="4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04" Apr 25 00:26:13.640899 containerd[1573]: time="2026-04-25T00:26:13.640879770Z" level=info msg="RemoveContainer for \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\"" Apr 25 00:26:13.643872 containerd[1573]: time="2026-04-25T00:26:13.643839796Z" level=info msg="RemoveContainer for \"4bf031913025222994d635c5f2a8366986b8a3b47af59a834504a567fd801d04\" returns successfully" Apr 25 00:26:14.433504 sshd[4306]: pam_unix(sshd:session): session closed for user core Apr 25 00:26:14.442687 systemd[1]: Started sshd@22-10.0.0.3:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064). Apr 25 00:26:14.443041 systemd[1]: sshd@21-10.0.0.3:22-10.0.0.1:34060.service: Deactivated successfully. Apr 25 00:26:14.445568 systemd[1]: session-22.scope: Deactivated successfully. Apr 25 00:26:14.446254 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. Apr 25 00:26:14.447525 systemd-logind[1554]: Removed session 22. 
Apr 25 00:26:14.466826 kubelet[2671]: E0425 00:26:14.466794 2671 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 25 00:26:14.477079 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:14.478281 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:14.481633 systemd-logind[1554]: New session 23 of user core. Apr 25 00:26:14.487626 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 25 00:26:15.039770 sshd[4472]: pam_unix(sshd:session): session closed for user core Apr 25 00:26:15.050601 systemd[1]: Started sshd@23-10.0.0.3:22-10.0.0.1:34072.service - OpenSSH per-connection server daemon (10.0.0.1:34072). Apr 25 00:26:15.051083 systemd[1]: sshd@22-10.0.0.3:22-10.0.0.1:34064.service: Deactivated successfully. Apr 25 00:26:15.052676 systemd[1]: session-23.scope: Deactivated successfully. Apr 25 00:26:15.055257 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. Apr 25 00:26:15.058756 systemd-logind[1554]: Removed session 23. Apr 25 00:26:15.086468 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 34072 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc Apr 25 00:26:15.087325 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:26:15.092855 systemd-logind[1554]: New session 24 of user core. Apr 25 00:26:15.102786 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 25 00:26:15.117521 kubelet[2671]: I0425 00:26:15.117471 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-etc-cni-netd\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117521 kubelet[2671]: I0425 00:26:15.117501 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-cilium-config-path\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117521 kubelet[2671]: I0425 00:26:15.117516 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb7hg\" (UniqueName: \"kubernetes.io/projected/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-kube-api-access-cb7hg\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117527 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-bpf-maps\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117540 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-cilium-run\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117551 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-cilium-cgroup\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117562 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-host-proc-sys-kernel\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117645 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-cni-path\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.117892 kubelet[2671]: I0425 00:26:15.117670 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-cilium-ipsec-secrets\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117695 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-lib-modules\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117722 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-clustermesh-secrets\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117742 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-hubble-tls\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117762 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-hostproc\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117773 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-host-proc-sys-net\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.118010 kubelet[2671]: I0425 00:26:15.117788 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3-xtables-lock\") pod \"cilium-qz8xv\" (UID: \"6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3\") " pod="kube-system/cilium-qz8xv"
Apr 25 00:26:15.152251 sshd[4487]: pam_unix(sshd:session): session closed for user core
Apr 25 00:26:15.158653 systemd[1]: Started sshd@24-10.0.0.3:22-10.0.0.1:34080.service - OpenSSH per-connection server daemon (10.0.0.1:34080).
Apr 25 00:26:15.158951 systemd[1]: sshd@23-10.0.0.3:22-10.0.0.1:34072.service: Deactivated successfully.
Apr 25 00:26:15.160289 systemd[1]: session-24.scope: Deactivated successfully.
Apr 25 00:26:15.161558 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit.
Apr 25 00:26:15.162659 systemd-logind[1554]: Removed session 24.
Apr 25 00:26:15.186940 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 34080 ssh2: RSA SHA256:uRTsnPONmBUl48stbjd/ikyEKbfOzbiYL04dRfHHovc
Apr 25 00:26:15.188058 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:26:15.191401 systemd-logind[1554]: New session 25 of user core.
Apr 25 00:26:15.202676 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 25 00:26:15.361718 kubelet[2671]: E0425 00:26:15.361675 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:15.362567 containerd[1573]: time="2026-04-25T00:26:15.362500746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qz8xv,Uid:6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3,Namespace:kube-system,Attempt:0,}"
Apr 25 00:26:15.381713 containerd[1573]: time="2026-04-25T00:26:15.380397095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:26:15.381713 containerd[1573]: time="2026-04-25T00:26:15.380931617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:26:15.381713 containerd[1573]: time="2026-04-25T00:26:15.380949302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:26:15.381713 containerd[1573]: time="2026-04-25T00:26:15.381166217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:26:15.413547 containerd[1573]: time="2026-04-25T00:26:15.413490577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qz8xv,Uid:6e38a6d4-e5ce-4f75-8e7d-9241fb10c3c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\""
Apr 25 00:26:15.414079 kubelet[2671]: E0425 00:26:15.414054 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:15.418947 containerd[1573]: time="2026-04-25T00:26:15.418923874Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 25 00:26:15.431115 kubelet[2671]: I0425 00:26:15.431038 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32605613-06bd-4153-a665-f07a955ada75" path="/var/lib/kubelet/pods/32605613-06bd-4153-a665-f07a955ada75/volumes"
Apr 25 00:26:15.433422 containerd[1573]: time="2026-04-25T00:26:15.433357287Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6bba2865bc409b5c226f6167f9973a37b9a546f663b7917de4b87aa8326f114\""
Apr 25 00:26:15.434641 containerd[1573]: time="2026-04-25T00:26:15.433927556Z" level=info msg="StartContainer for \"a6bba2865bc409b5c226f6167f9973a37b9a546f663b7917de4b87aa8326f114\""
Apr 25 00:26:15.481211 containerd[1573]: time="2026-04-25T00:26:15.481166847Z" level=info msg="StartContainer for \"a6bba2865bc409b5c226f6167f9973a37b9a546f663b7917de4b87aa8326f114\" returns successfully"
Apr 25 00:26:15.512093 containerd[1573]: time="2026-04-25T00:26:15.512030284Z" level=info msg="shim disconnected" id=a6bba2865bc409b5c226f6167f9973a37b9a546f663b7917de4b87aa8326f114 namespace=k8s.io
Apr 25 00:26:15.512093 containerd[1573]: time="2026-04-25T00:26:15.512091068Z" level=warning msg="cleaning up after shim disconnected" id=a6bba2865bc409b5c226f6167f9973a37b9a546f663b7917de4b87aa8326f114 namespace=k8s.io
Apr 25 00:26:15.512093 containerd[1573]: time="2026-04-25T00:26:15.512099157Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:26:15.630350 kubelet[2671]: E0425 00:26:15.627421 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:15.634731 containerd[1573]: time="2026-04-25T00:26:15.634695582Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 25 00:26:15.646235 containerd[1573]: time="2026-04-25T00:26:15.646189135Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7dba91231f51ef349ddd2db4eea61ca3ac5030f5df16469bb631345fec4e9714\""
Apr 25 00:26:15.646664 containerd[1573]: time="2026-04-25T00:26:15.646638873Z" level=info msg="StartContainer for \"7dba91231f51ef349ddd2db4eea61ca3ac5030f5df16469bb631345fec4e9714\""
Apr 25 00:26:15.683921 containerd[1573]: time="2026-04-25T00:26:15.683889117Z" level=info msg="StartContainer for \"7dba91231f51ef349ddd2db4eea61ca3ac5030f5df16469bb631345fec4e9714\" returns successfully"
Apr 25 00:26:15.703505 containerd[1573]: time="2026-04-25T00:26:15.703409389Z" level=info msg="shim disconnected" id=7dba91231f51ef349ddd2db4eea61ca3ac5030f5df16469bb631345fec4e9714 namespace=k8s.io
Apr 25 00:26:15.703505 containerd[1573]: time="2026-04-25T00:26:15.703499037Z" level=warning msg="cleaning up after shim disconnected" id=7dba91231f51ef349ddd2db4eea61ca3ac5030f5df16469bb631345fec4e9714 namespace=k8s.io
Apr 25 00:26:15.703505 containerd[1573]: time="2026-04-25T00:26:15.703506192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:26:16.630639 kubelet[2671]: E0425 00:26:16.630522 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:16.634499 containerd[1573]: time="2026-04-25T00:26:16.634458334Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 25 00:26:16.647587 containerd[1573]: time="2026-04-25T00:26:16.647533609Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1\""
Apr 25 00:26:16.648070 containerd[1573]: time="2026-04-25T00:26:16.647982016Z" level=info msg="StartContainer for \"53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1\""
Apr 25 00:26:16.703793 containerd[1573]: time="2026-04-25T00:26:16.703750866Z" level=info msg="StartContainer for \"53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1\" returns successfully"
Apr 25 00:26:16.723336 containerd[1573]: time="2026-04-25T00:26:16.723287241Z" level=info msg="shim disconnected" id=53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1 namespace=k8s.io
Apr 25 00:26:16.723336 containerd[1573]: time="2026-04-25T00:26:16.723326047Z" level=warning msg="cleaning up after shim disconnected" id=53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1 namespace=k8s.io
Apr 25 00:26:16.723336 containerd[1573]: time="2026-04-25T00:26:16.723332056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:26:17.222160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53edd2c762d35cb5381af7370f5bf00d1a3220877b9841f58a13fade218713a1-rootfs.mount: Deactivated successfully.
Apr 25 00:26:17.635457 kubelet[2671]: E0425 00:26:17.635392 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:17.639137 containerd[1573]: time="2026-04-25T00:26:17.639100402Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 25 00:26:17.650457 containerd[1573]: time="2026-04-25T00:26:17.650391831Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9\""
Apr 25 00:26:17.650799 containerd[1573]: time="2026-04-25T00:26:17.650762467Z" level=info msg="StartContainer for \"96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9\""
Apr 25 00:26:17.692446 containerd[1573]: time="2026-04-25T00:26:17.692394371Z" level=info msg="StartContainer for \"96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9\" returns successfully"
Apr 25 00:26:17.707913 containerd[1573]: time="2026-04-25T00:26:17.707860794Z" level=info msg="shim disconnected" id=96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9 namespace=k8s.io
Apr 25 00:26:17.707913 containerd[1573]: time="2026-04-25T00:26:17.707908041Z" level=warning msg="cleaning up after shim disconnected" id=96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9 namespace=k8s.io
Apr 25 00:26:17.707913 containerd[1573]: time="2026-04-25T00:26:17.707915659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:26:18.222388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96f67c8280079e003c17c700866c71d06ca0291b4ba72762cbc46f5d0ba562a9-rootfs.mount: Deactivated successfully.
Apr 25 00:26:18.639623 kubelet[2671]: E0425 00:26:18.639571 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:18.643657 containerd[1573]: time="2026-04-25T00:26:18.643608691Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 25 00:26:18.655807 containerd[1573]: time="2026-04-25T00:26:18.655760167Z" level=info msg="CreateContainer within sandbox \"b6a045eb9ccb8ad0dd4a30d0abb3ef9f19c5bc923e3f94f4962e2dd7d6acf240\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd81a83753551566fc28c73ed44765ce0639d5c83674a1df5857e329cb224412\""
Apr 25 00:26:18.656185 containerd[1573]: time="2026-04-25T00:26:18.656163333Z" level=info msg="StartContainer for \"cd81a83753551566fc28c73ed44765ce0639d5c83674a1df5857e329cb224412\""
Apr 25 00:26:18.697845 containerd[1573]: time="2026-04-25T00:26:18.697812946Z" level=info msg="StartContainer for \"cd81a83753551566fc28c73ed44765ce0639d5c83674a1df5857e329cb224412\" returns successfully"
Apr 25 00:26:18.901577 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 25 00:26:19.643358 kubelet[2671]: E0425 00:26:19.643295 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:19.654203 kubelet[2671]: I0425 00:26:19.654156 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qz8xv" podStartSLOduration=4.6541451 podStartE2EDuration="4.6541451s" podCreationTimestamp="2026-04-25 00:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:26:19.653893301 +0000 UTC m=+70.304413372" watchObservedRunningTime="2026-04-25 00:26:19.6541451 +0000 UTC m=+70.304665171"
Apr 25 00:26:21.363471 kubelet[2671]: E0425 00:26:21.363339 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:21.392863 systemd[1]: run-containerd-runc-k8s.io-cd81a83753551566fc28c73ed44765ce0639d5c83674a1df5857e329cb224412-runc.6Hjpua.mount: Deactivated successfully.
Apr 25 00:26:21.545575 systemd-networkd[1252]: lxc_health: Link UP
Apr 25 00:26:21.552179 systemd-networkd[1252]: lxc_health: Gained carrier
Apr 25 00:26:23.347086 systemd-networkd[1252]: lxc_health: Gained IPv6LL
Apr 25 00:26:23.364033 kubelet[2671]: E0425 00:26:23.363477 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:23.649686 kubelet[2671]: E0425 00:26:23.649559 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:24.651387 kubelet[2671]: E0425 00:26:24.651336 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:27.428741 kubelet[2671]: E0425 00:26:27.428674 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 25 00:26:27.675818 sshd[4495]: pam_unix(sshd:session): session closed for user core
Apr 25 00:26:27.678395 systemd[1]: sshd@24-10.0.0.3:22-10.0.0.1:34080.service: Deactivated successfully.
Apr 25 00:26:27.680100 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit.
Apr 25 00:26:27.680125 systemd[1]: session-25.scope: Deactivated successfully.
Apr 25 00:26:27.681004 systemd-logind[1554]: Removed session 25.