Apr 17 23:57:35.959929 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:57:35.959948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:57:35.959957 kernel: BIOS-provided physical RAM map:
Apr 17 23:57:35.959963 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:57:35.959968 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 23:57:35.959973 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 23:57:35.959979 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 23:57:35.959985 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 23:57:35.959990 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 17 23:57:35.959995 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 17 23:57:35.960001 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 17 23:57:35.960007 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 17 23:57:35.960012 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 17 23:57:35.960017 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 17 23:57:35.960024 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 17 23:57:35.960029 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 23:57:35.960036 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 17 23:57:35.960042 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 17 23:57:35.960047 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 23:57:35.960052 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:57:35.960058 kernel: NX (Execute Disable) protection: active
Apr 17 23:57:35.960063 kernel: APIC: Static calls initialized
Apr 17 23:57:35.960069 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:57:35.960074 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 17 23:57:35.960079 kernel: SMBIOS 2.8 present.
Apr 17 23:57:35.960085 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 17 23:57:35.960090 kernel: Hypervisor detected: KVM
Apr 17 23:57:35.960097 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:57:35.960103 kernel: kvm-clock: using sched offset of 6949037616 cycles
Apr 17 23:57:35.960108 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:57:35.960114 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:57:35.960120 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:57:35.960126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:57:35.960132 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 17 23:57:35.960137 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:57:35.960143 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:57:35.960150 kernel: Using GB pages for direct mapping
Apr 17 23:57:35.960156 kernel: Secure boot disabled
Apr 17 23:57:35.960162 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:57:35.960167 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 23:57:35.960175 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:57:35.960182 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960187 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960195 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 23:57:35.960201 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960207 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960212 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960217 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:57:35.960222 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:57:35.960227 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 23:57:35.960233 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 23:57:35.960238 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 23:57:35.960243 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 23:57:35.960248 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 23:57:35.960253 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 23:57:35.960257 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 23:57:35.960262 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 23:57:35.960267 kernel: No NUMA configuration found
Apr 17 23:57:35.960272 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 17 23:57:35.960278 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 17 23:57:35.960283 kernel: Zone ranges:
Apr 17 23:57:35.960288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:57:35.960293 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 17 23:57:35.960298 kernel: Normal empty
Apr 17 23:57:35.960303 kernel: Movable zone start for each node
Apr 17 23:57:35.960308 kernel: Early memory node ranges
Apr 17 23:57:35.960313 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:57:35.960318 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 23:57:35.960323 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 23:57:35.960329 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 17 23:57:35.960334 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 17 23:57:35.960339 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 17 23:57:35.960343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 17 23:57:35.960348 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:57:35.960353 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:57:35.960358 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 23:57:35.960363 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:57:35.960368 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 17 23:57:35.960375 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:57:35.960379 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 17 23:57:35.960384 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:57:35.960389 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:57:35.960394 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:57:35.960399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:57:35.960404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:57:35.960409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:57:35.960414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:57:35.960419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:57:35.960425 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:57:35.960430 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:57:35.960435 kernel: TSC deadline timer available
Apr 17 23:57:35.960440 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:57:35.960445 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:57:35.960450 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:57:35.960454 kernel: kvm-guest: setup PV sched yield
Apr 17 23:57:35.960460 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:57:35.960465 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:57:35.960471 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:57:35.960476 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:57:35.960515 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:57:35.960521 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:57:35.960526 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:57:35.960530 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:57:35.960536 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:57:35.960541 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:57:35.960548 kernel: random: crng init done
Apr 17 23:57:35.960553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:57:35.960558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:57:35.960563 kernel: Fallback order for Node 0: 0
Apr 17 23:57:35.960568 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 17 23:57:35.960573 kernel: Policy zone: DMA32
Apr 17 23:57:35.960578 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:57:35.960583 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 167136K reserved, 0K cma-reserved)
Apr 17 23:57:35.960588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:57:35.960595 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:57:35.960599 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:57:35.960604 kernel: Dynamic Preempt: voluntary
Apr 17 23:57:35.960610 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:57:35.960620 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:57:35.960645 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:57:35.960650 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:57:35.960656 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:57:35.960661 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:57:35.960667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:57:35.960672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:57:35.960678 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:57:35.960685 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:57:35.960690 kernel: Console: colour dummy device 80x25
Apr 17 23:57:35.960695 kernel: printk: console [ttyS0] enabled
Apr 17 23:57:35.960701 kernel: ACPI: Core revision 20230628
Apr 17 23:57:35.960706 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:57:35.960714 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:57:35.960719 kernel: x2apic enabled
Apr 17 23:57:35.960725 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:57:35.960730 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:57:35.960736 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:57:35.960741 kernel: kvm-guest: setup PV IPIs
Apr 17 23:57:35.960747 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:57:35.960752 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:57:35.960758 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:57:35.960765 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:57:35.960770 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:57:35.960776 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:57:35.960781 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:57:35.960787 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:57:35.960792 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:57:35.960798 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:57:35.960804 kernel: RETBleed: Vulnerable
Apr 17 23:57:35.960809 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:57:35.960816 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:57:35.960821 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:57:35.960827 kernel: active return thunk: its_return_thunk
Apr 17 23:57:35.960832 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:57:35.960838 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:57:35.960843 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:57:35.960848 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:57:35.960854 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:57:35.960859 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:57:35.960866 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:57:35.960871 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:57:35.960877 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:57:35.960882 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:57:35.960887 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:57:35.960893 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:57:35.960898 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:57:35.960904 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:57:35.960910 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:57:35.960916 kernel: landlock: Up and running.
Apr 17 23:57:35.960921 kernel: SELinux: Initializing.
Apr 17 23:57:35.960927 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:57:35.960932 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:57:35.960938 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:57:35.960943 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:57:35.960949 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:57:35.960955 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:57:35.960961 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:57:35.960967 kernel: signal: max sigframe size: 3632
Apr 17 23:57:35.960972 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:57:35.960978 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:57:35.960983 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:57:35.960989 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:57:35.960994 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:57:35.961000 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:57:35.961005 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:57:35.961012 kernel: smpboot: Max logical packages: 1
Apr 17 23:57:35.961017 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:57:35.961023 kernel: devtmpfs: initialized
Apr 17 23:57:35.961028 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:57:35.961034 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 23:57:35.961039 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 23:57:35.961045 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 17 23:57:35.961050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 23:57:35.961056 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 23:57:35.961063 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:57:35.961068 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:57:35.961074 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:57:35.961079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:57:35.961085 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:57:35.961090 kernel: audit: type=2000 audit(1776470254.407:1): state=initialized audit_enabled=0 res=1
Apr 17 23:57:35.961096 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:57:35.961101 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:57:35.961107 kernel: cpuidle: using governor menu
Apr 17 23:57:35.961113 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:57:35.961119 kernel: dca service started, version 1.12.1
Apr 17 23:57:35.961125 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:57:35.961130 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:57:35.961135 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:57:35.961141 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:57:35.961146 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:57:35.961152 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:57:35.961157 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:57:35.961164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:57:35.961170 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:57:35.961175 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:57:35.961180 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:57:35.961186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:57:35.961191 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:57:35.961197 kernel: ACPI: Interpreter enabled
Apr 17 23:57:35.961202 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:57:35.961208 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:57:35.961214 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:57:35.961220 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:57:35.961225 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:57:35.961231 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:57:35.961332 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:57:35.961393 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:57:35.961448 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:57:35.961456 kernel: PCI host bridge to bus 0000:00
Apr 17 23:57:35.961602 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:57:35.961677 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:57:35.961727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:57:35.961776 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:57:35.961824 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:57:35.961871 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 17 23:57:35.961924 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:57:35.961989 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:57:35.962050 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:57:35.962105 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 17 23:57:35.962161 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 17 23:57:35.962215 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:57:35.962269 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 17 23:57:35.962326 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:57:35.962387 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:57:35.962443 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 17 23:57:35.962552 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 17 23:57:35.962613 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 17 23:57:35.962754 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:57:35.962814 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 17 23:57:35.962888 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 17 23:57:35.962946 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 17 23:57:35.963007 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:57:35.963081 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 17 23:57:35.963138 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 17 23:57:35.963193 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 17 23:57:35.963251 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 17 23:57:35.963310 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:57:35.963364 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:57:35.963423 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:57:35.963477 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 17 23:57:35.963568 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 17 23:57:35.963647 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:57:35.963710 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 17 23:57:35.963717 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:57:35.963723 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:57:35.963728 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:57:35.963734 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:57:35.963739 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:57:35.963745 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:57:35.963750 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:57:35.963758 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:57:35.963763 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:57:35.963769 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:57:35.963774 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:57:35.963779 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:57:35.963785 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:57:35.963791 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:57:35.963796 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:57:35.963802 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:57:35.963809 kernel: iommu: Default domain type: Translated
Apr 17 23:57:35.963814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:57:35.963820 kernel: efivars: Registered efivars operations
Apr 17 23:57:35.963825 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:57:35.963831 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:57:35.963836 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 23:57:35.963841 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 17 23:57:35.963847 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 17 23:57:35.963852 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 17 23:57:35.963908 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:57:35.963963 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:57:35.964018 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:57:35.964025 kernel: vgaarb: loaded
Apr 17 23:57:35.964031 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:57:35.964037 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:57:35.964042 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:57:35.964048 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:57:35.964053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:57:35.964060 kernel: pnp: PnP ACPI init
Apr 17 23:57:35.964121 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:57:35.964129 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:57:35.964135 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:57:35.964140 kernel: NET: Registered PF_INET protocol family
Apr 17 23:57:35.964146 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:57:35.964152 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:57:35.964157 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:57:35.964165 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:57:35.964171 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:57:35.964176 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:57:35.964182 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:57:35.964187 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:57:35.964193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:57:35.964199 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:57:35.964253 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 17 23:57:35.964308 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 17 23:57:35.964363 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:57:35.964414 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:57:35.964462 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:57:35.964570 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:57:35.964619 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:57:35.964690 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 17 23:57:35.964697 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:57:35.964703 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:57:35.964712 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:57:35.964717 kernel: Initialise system trusted keyrings
Apr 17 23:57:35.964723 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:57:35.964728 kernel: Key type asymmetric registered
Apr 17 23:57:35.964734 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:57:35.964739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:57:35.964745 kernel: io scheduler mq-deadline registered
Apr 17 23:57:35.964750 kernel: io scheduler kyber registered
Apr 17 23:57:35.964756 kernel: io scheduler bfq registered
Apr 17 23:57:35.964763 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:57:35.964769 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:57:35.964774 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:57:35.964780 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:57:35.964785 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:57:35.964791 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:57:35.964796 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:57:35.964801 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:57:35.964807 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:57:35.964901 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:57:35.964957 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:57:35.964964 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:57:35.965033 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:57:35 UTC (1776470255)
Apr 17 23:57:35.965101 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 23:57:35.965108 kernel: intel_pstate: CPU model not supported
Apr 17 23:57:35.965114 kernel: efifb: probing for efifb
Apr 17 23:57:35.965119 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 17 23:57:35.965127 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 17 23:57:35.965133 kernel: efifb: scrolling: redraw
Apr 17 23:57:35.965138 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 17 23:57:35.965144 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:57:35.965149 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:57:35.965167 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:57:35.965174 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:57:35.965180 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:57:35.965186 kernel: Segment Routing with IPv6
Apr 17 23:57:35.965193 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:57:35.965198 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:57:35.965204 kernel: Key type dns_resolver registered
Apr 17 23:57:35.965210 kernel: IPI shorthand broadcast: enabled
Apr 17 23:57:35.965215 kernel: sched_clock: Marking stable (896019152, 268820244)->(1251527232, -86687836)
Apr 17 23:57:35.965221 kernel: registered taskstats version 1
Apr 17 23:57:35.965226 kernel: Loading compiled-in X.509 certificates
Apr 17 23:57:35.965232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:57:35.965237 kernel: Key type .fscrypt registered
Apr 17 23:57:35.965244 kernel: Key type fscrypt-provisioning registered
Apr 17 23:57:35.965250 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:57:35.965255 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:57:35.965261 kernel: ima: No architecture policies found Apr 17 23:57:35.965266 kernel: clk: Disabling unused clocks Apr 17 23:57:35.965272 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:57:35.965277 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:57:35.965283 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:57:35.965289 kernel: Run /init as init process Apr 17 23:57:35.965295 kernel: with arguments: Apr 17 23:57:35.965301 kernel: /init Apr 17 23:57:35.965307 kernel: with environment: Apr 17 23:57:35.965312 kernel: HOME=/ Apr 17 23:57:35.965318 kernel: TERM=linux Apr 17 23:57:35.965325 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:57:35.965333 systemd[1]: Detected virtualization kvm. Apr 17 23:57:35.965340 systemd[1]: Detected architecture x86-64. Apr 17 23:57:35.965346 systemd[1]: Running in initrd. Apr 17 23:57:35.965352 systemd[1]: No hostname configured, using default hostname. Apr 17 23:57:35.965358 systemd[1]: Hostname set to . Apr 17 23:57:35.965364 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:57:35.965372 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:57:35.965378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:57:35.965383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:57:35.965390 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 17 23:57:35.965396 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:57:35.965402 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:57:35.965408 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:57:35.965416 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:57:35.965424 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:57:35.965430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:57:35.965436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:57:35.965442 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:57:35.965448 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:57:35.965454 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:57:35.965460 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:57:35.965467 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:57:35.965473 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:57:35.965479 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:57:35.965518 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:57:35.965524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:57:35.965530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:57:35.965536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:57:35.965542 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 17 23:57:35.965548 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:57:35.965556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:57:35.965562 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:57:35.965568 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:57:35.965574 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:57:35.965580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:57:35.965586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:35.965593 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:57:35.965612 systemd-journald[194]: Collecting audit messages is disabled. Apr 17 23:57:35.965647 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:57:35.965654 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:57:35.965663 systemd-journald[194]: Journal started Apr 17 23:57:35.965678 systemd-journald[194]: Runtime Journal (/run/log/journal/45462eb1e2f04b6d90ec7ee7cf3edb64) is 6.0M, max 48.3M, 42.2M free. Apr 17 23:57:35.971831 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:57:35.972377 systemd-modules-load[195]: Inserted module 'overlay' Apr 17 23:57:35.983693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:57:35.989203 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:57:35.994001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:57:35.995229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:57:36.006205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:57:36.011242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:57:36.013841 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:57:36.016920 kernel: Bridge firewalling registered Apr 17 23:57:36.014671 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 17 23:57:36.018144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:57:36.018342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:57:36.027596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:57:36.035810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:57:36.037016 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:57:36.038437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:57:36.054320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:57:36.065779 systemd-resolved[224]: Positive Trust Anchors: Apr 17 23:57:36.065805 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:57:36.065830 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:57:36.067719 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:57:36.067760 systemd-resolved[224]: Defaulting to hostname 'linux'. Apr 17 23:57:36.069727 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:57:36.070423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:57:36.101391 dracut-cmdline[230]: dracut-dracut-053 Apr 17 23:57:36.104455 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:57:36.175556 kernel: SCSI subsystem initialized Apr 17 23:57:36.183583 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:57:36.194544 kernel: iscsi: registered transport (tcp) Apr 17 23:57:36.213611 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:57:36.213720 kernel: QLogic iSCSI HBA Driver Apr 17 23:57:36.252900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 17 23:57:36.270773 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:57:36.294529 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:57:36.294594 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:57:36.296523 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:57:36.337592 kernel: raid6: avx512x4 gen() 28376 MB/s Apr 17 23:57:36.354565 kernel: raid6: avx512x2 gen() 43044 MB/s Apr 17 23:57:36.371572 kernel: raid6: avx512x1 gen() 44034 MB/s Apr 17 23:57:36.388561 kernel: raid6: avx2x4 gen() 36375 MB/s Apr 17 23:57:36.405563 kernel: raid6: avx2x2 gen() 36188 MB/s Apr 17 23:57:36.424052 kernel: raid6: avx2x1 gen() 27791 MB/s Apr 17 23:57:36.424103 kernel: raid6: using algorithm avx512x1 gen() 44034 MB/s Apr 17 23:57:36.442615 kernel: raid6: .... xor() 28505 MB/s, rmw enabled Apr 17 23:57:36.442683 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:57:36.461557 kernel: xor: automatically using best checksumming function avx Apr 17 23:57:36.595575 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:57:36.605934 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:57:36.621687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:57:36.632716 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 17 23:57:36.635396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:57:36.636918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:57:36.653193 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 17 23:57:36.676841 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:57:36.685709 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 17 23:57:36.718840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:57:36.729709 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:57:36.737327 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:57:36.740194 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:57:36.744002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:57:36.747517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:57:36.759724 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:57:36.767556 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:57:36.770000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:57:36.771203 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:57:36.776810 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:57:36.780781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:57:36.781129 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:57:36.787312 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:57:36.789391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:57:36.804732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:57:36.804750 kernel: GPT:9289727 != 19775487 Apr 17 23:57:36.804758 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:57:36.804765 kernel: GPT:9289727 != 19775487 Apr 17 23:57:36.804771 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:57:36.804782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:57:36.789664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:57:36.794118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:36.815525 kernel: libata version 3.00 loaded. Apr 17 23:57:36.811797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:36.818543 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:57:36.818683 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:57:36.822656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:57:36.828088 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:57:36.828201 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:57:36.828274 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:57:36.822739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:57:36.832364 kernel: scsi host0: ahci Apr 17 23:57:36.832522 kernel: scsi host1: ahci Apr 17 23:57:36.834889 kernel: scsi host2: ahci Apr 17 23:57:36.834994 kernel: scsi host3: ahci Apr 17 23:57:36.835951 kernel: scsi host4: ahci Apr 17 23:57:36.836139 kernel: AES CTR mode by8 optimization enabled Apr 17 23:57:36.837530 kernel: scsi host5: ahci Apr 17 23:57:36.842004 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 17 23:57:36.842027 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 17 23:57:36.842035 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (473) Apr 17 23:57:36.842042 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 17 23:57:36.849815 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 17 23:57:36.849842 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 17 23:57:36.849853 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 17 23:57:36.849861 kernel: BTRFS: device label OEM devid 
1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Apr 17 23:57:36.854771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:36.861587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 23:57:36.867224 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 23:57:36.868109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:57:36.876783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:57:36.880090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:57:36.884250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:57:36.904759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:57:36.909340 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:57:36.914693 disk-uuid[559]: Primary Header is updated. Apr 17 23:57:36.914693 disk-uuid[559]: Secondary Entries is updated. Apr 17 23:57:36.914693 disk-uuid[559]: Secondary Header is updated. Apr 17 23:57:36.918478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:57:36.921588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:57:36.929826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:57:37.156538 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:57:37.156614 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:57:37.165548 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:57:37.165615 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:57:37.167584 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:57:37.170548 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:57:37.170568 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:57:37.172455 kernel: ata3.00: applying bridge limits Apr 17 23:57:37.174573 kernel: ata3.00: configured for UDMA/100 Apr 17 23:57:37.176584 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:57:37.224779 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:57:37.225008 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:57:37.240563 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:57:37.924273 disk-uuid[562]: The operation has completed successfully. Apr 17 23:57:37.926954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:57:37.944020 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:57:37.944133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:57:37.964954 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:57:37.971325 sh[595]: Success Apr 17 23:57:37.984574 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:57:38.014429 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:57:38.028891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:57:38.030723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:57:38.046966 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:57:38.046999 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:57:38.047010 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:57:38.050315 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:57:38.050335 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:57:38.056972 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:57:38.058994 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:57:38.064625 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:57:38.065853 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:57:38.077558 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:57:38.077576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:57:38.077584 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:57:38.081532 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:57:38.088670 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:57:38.091873 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:57:38.098167 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:57:38.107725 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:57:38.154828 ignition[693]: Ignition 2.19.0 Apr 17 23:57:38.154836 ignition[693]: Stage: fetch-offline Apr 17 23:57:38.154867 ignition[693]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:57:38.154874 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:57:38.155016 ignition[693]: parsed url from cmdline: "" Apr 17 23:57:38.155018 ignition[693]: no config URL provided Apr 17 23:57:38.155022 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:57:38.155040 ignition[693]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:57:38.155084 ignition[693]: op(1): [started] loading QEMU firmware config module Apr 17 23:57:38.155088 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:57:38.164551 ignition[693]: op(1): [finished] loading QEMU firmware config module Apr 17 23:57:38.178432 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:57:38.196707 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:57:38.216148 systemd-networkd[783]: lo: Link UP Apr 17 23:57:38.216173 systemd-networkd[783]: lo: Gained carrier Apr 17 23:57:38.217199 systemd-networkd[783]: Enumeration completed Apr 17 23:57:38.217255 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:57:38.218412 systemd[1]: Reached target network.target - Network. Apr 17 23:57:38.218955 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:57:38.218957 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 17 23:57:38.219980 systemd-networkd[783]: eth0: Link UP Apr 17 23:57:38.219982 systemd-networkd[783]: eth0: Gained carrier Apr 17 23:57:38.219988 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:57:38.241573 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:57:38.334196 ignition[693]: parsing config with SHA512: a73d90001720568e2aa3032e48d425c5db70315226947472c316fa08bbdd9f85e0e6d46efba0a3c6127f88253e66305997eefa200dc963c27fdf7b81090194fe Apr 17 23:57:38.338593 unknown[693]: fetched base config from "system" Apr 17 23:57:38.338603 unknown[693]: fetched user config from "qemu" Apr 17 23:57:38.339013 ignition[693]: fetch-offline: fetch-offline passed Apr 17 23:57:38.339064 ignition[693]: Ignition finished successfully Apr 17 23:57:38.345919 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:57:38.348131 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:57:38.373789 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:57:38.391517 ignition[787]: Ignition 2.19.0 Apr 17 23:57:38.391534 ignition[787]: Stage: kargs Apr 17 23:57:38.391700 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:57:38.391707 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:57:38.392314 ignition[787]: kargs: kargs passed Apr 17 23:57:38.392344 ignition[787]: Ignition finished successfully Apr 17 23:57:38.399201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:57:38.413810 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 17 23:57:38.437217 ignition[796]: Ignition 2.19.0 Apr 17 23:57:38.437255 ignition[796]: Stage: disks Apr 17 23:57:38.437386 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:57:38.437392 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:57:38.444975 ignition[796]: disks: disks passed Apr 17 23:57:38.445018 ignition[796]: Ignition finished successfully Apr 17 23:57:38.454605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:57:38.484344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:57:38.485428 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:57:38.489162 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:57:38.494197 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:57:38.497467 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:57:38.514707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:57:38.529832 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:57:38.534218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:57:38.535675 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:57:38.623524 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:57:38.624016 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:57:38.627847 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:57:38.650723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:57:38.655289 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 17 23:57:38.659849 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Apr 17 23:57:38.659023 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:57:38.670569 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:57:38.670596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:57:38.670613 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:57:38.659072 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:57:38.675873 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:57:38.659096 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:57:38.676839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:57:38.696712 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:57:38.698780 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:57:38.736812 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:57:38.741798 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:57:38.746779 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:57:38.751186 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:57:38.821325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:57:38.830613 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:57:38.833134 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 17 23:57:38.841537 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:57:38.853006 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:57:38.859300 ignition[928]: INFO : Ignition 2.19.0 Apr 17 23:57:38.859300 ignition[928]: INFO : Stage: mount Apr 17 23:57:38.861428 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:57:38.861428 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:57:38.861428 ignition[928]: INFO : mount: mount passed Apr 17 23:57:38.861428 ignition[928]: INFO : Ignition finished successfully Apr 17 23:57:38.861239 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:57:38.874618 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:57:39.044769 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:57:39.056771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:57:39.063559 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Apr 17 23:57:39.067033 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:57:39.067062 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:57:39.067071 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:57:39.072624 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:57:39.072962 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:57:39.092735 ignition[957]: INFO : Ignition 2.19.0
Apr 17 23:57:39.092735 ignition[957]: INFO : Stage: files
Apr 17 23:57:39.092735 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:57:39.092735 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:57:39.099298 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:57:39.101878 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:57:39.101878 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:57:39.107010 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:57:39.107010 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:57:39.107010 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:57:39.107010 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:57:39.107010 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:57:39.105073 unknown[957]: wrote ssh authorized keys file for user: core
Apr 17 23:57:39.168122 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:57:39.273739 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:57:39.273739 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:57:39.280309 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 17 23:57:39.331343 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:57:39.420375 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:57:39.420375 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:57:39.427128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 23:57:39.663268 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:57:39.780123 systemd-networkd[783]: eth0: Gained IPv6LL
Apr 17 23:57:39.934477 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:57:39.934477 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 17 23:57:39.940997 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:57:39.965336 ignition[957]: INFO : files: files passed
Apr 17 23:57:39.965336 ignition[957]: INFO : Ignition finished successfully
Apr 17 23:57:39.976873 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:57:39.995706 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:57:39.999803 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:57:40.001016 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:57:40.001096 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:57:40.014242 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:57:40.018530 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:57:40.021345 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:57:40.024024 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:57:40.027790 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:57:40.028632 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:57:40.047626 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:57:40.069211 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:57:40.069324 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:57:40.072382 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:57:40.077821 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:57:40.079738 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:57:40.083768 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:57:40.105091 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:57:40.121700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:57:40.134014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:57:40.136598 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:57:40.139287 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:57:40.143373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:57:40.143465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:57:40.149879 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:57:40.153637 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:57:40.156908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:57:40.160262 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:57:40.161196 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:57:40.166564 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:57:40.170274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:57:40.174074 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:57:40.177941 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:57:40.181385 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:57:40.185113 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:57:40.185225 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:57:40.190453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:57:40.191459 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:57:40.196990 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:57:40.197294 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:57:40.201029 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:57:40.201150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:57:40.208062 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:57:40.208165 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:57:40.211956 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:57:40.215244 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:57:40.218124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:57:40.220104 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:57:40.226640 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:57:40.230830 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:57:40.230952 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:57:40.234054 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:57:40.234176 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:57:40.237551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:57:40.237722 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:57:40.241315 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:57:40.241447 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:57:40.258752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:57:40.262973 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:57:40.264616 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:57:40.264740 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:57:40.274886 ignition[1012]: INFO : Ignition 2.19.0
Apr 17 23:57:40.274886 ignition[1012]: INFO : Stage: umount
Apr 17 23:57:40.274886 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:57:40.274886 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:57:40.274886 ignition[1012]: INFO : umount: umount passed
Apr 17 23:57:40.274886 ignition[1012]: INFO : Ignition finished successfully
Apr 17 23:57:40.268729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:57:40.269080 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:57:40.277152 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:57:40.277281 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:57:40.280630 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:57:40.282218 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:57:40.282302 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:57:40.283751 systemd[1]: Stopped target network.target - Network.
Apr 17 23:57:40.284029 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:57:40.284093 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:57:40.291221 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:57:40.291276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:57:40.294414 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:57:40.294457 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:57:40.298334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:57:40.298383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:57:40.304186 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:57:40.310185 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:57:40.315046 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:57:40.315171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:57:40.317845 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:57:40.317882 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:57:40.333566 systemd-networkd[783]: eth0: DHCPv6 lease lost
Apr 17 23:57:40.336763 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:57:40.336884 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:57:40.338744 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:57:40.338768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:57:40.382720 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:57:40.384898 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:57:40.384952 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:57:40.387379 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:57:40.387415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:57:40.395239 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:57:40.396980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:57:40.404517 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:57:40.409189 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:57:40.411052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:57:40.425823 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:57:40.425993 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:57:40.432630 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:57:40.432786 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:57:40.436871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:57:40.436926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:57:40.439272 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:57:40.439301 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:57:40.444337 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:57:40.444376 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:57:40.450853 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:57:40.450907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:57:40.454361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:57:40.454409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:57:40.459805 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:57:40.459848 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:57:40.476721 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:57:40.477523 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:57:40.477569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:57:40.481859 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:57:40.481890 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:57:40.485441 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:57:40.485470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:57:40.490262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:57:40.490302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:57:40.506104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:57:40.506210 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:57:40.508587 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:57:40.513350 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:57:40.526109 systemd[1]: Switching root.
Apr 17 23:57:40.553247 systemd-journald[194]: Journal stopped
Apr 17 23:57:41.351304 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:57:41.351352 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:57:41.351366 kernel: SELinux: policy capability open_perms=1
Apr 17 23:57:41.351377 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:57:41.351384 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:57:41.351394 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:57:41.351402 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:57:41.351408 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:57:41.351417 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:57:41.351424 kernel: audit: type=1403 audit(1776470260.711:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:57:41.351435 systemd[1]: Successfully loaded SELinux policy in 34.295ms.
Apr 17 23:57:41.351448 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.391ms.
Apr 17 23:57:41.351456 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:57:41.351464 systemd[1]: Detected virtualization kvm.
Apr 17 23:57:41.351474 systemd[1]: Detected architecture x86-64.
Apr 17 23:57:41.351530 systemd[1]: Detected first boot.
Apr 17 23:57:41.351540 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:57:41.351548 zram_generator::config[1058]: No configuration found.
Apr 17 23:57:41.351559 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:57:41.351571 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:57:41.351578 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:57:41.351589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:57:41.351600 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:57:41.351608 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:57:41.351616 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:57:41.351623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:57:41.351631 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:57:41.351639 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:57:41.351647 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:57:41.351680 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:57:41.351690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:57:41.351698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:57:41.351706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:57:41.351714 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:57:41.351722 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:57:41.351730 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:57:41.351737 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:57:41.351745 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:57:41.351753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:57:41.351763 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:57:41.351771 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:57:41.351779 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:57:41.351787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:57:41.351795 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:57:41.351802 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:57:41.351810 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:57:41.351817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:57:41.351826 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:57:41.351834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:57:41.351843 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:57:41.351850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:57:41.351858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:57:41.351866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:57:41.351873 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:57:41.351881 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:57:41.351889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:57:41.351899 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:57:41.351907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:57:41.351914 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:57:41.351922 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:57:41.351932 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:57:41.351939 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:57:41.351947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:57:41.351955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:57:41.351964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:57:41.351972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:57:41.351979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:57:41.351987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:57:41.351994 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:57:41.352002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:57:41.352010 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:57:41.352017 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:57:41.352025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:57:41.352035 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:57:41.352043 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:57:41.352050 kernel: fuse: init (API version 7.39)
Apr 17 23:57:41.352057 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:57:41.352064 kernel: loop: module loaded
Apr 17 23:57:41.352072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:57:41.352079 kernel: ACPI: bus type drm_connector registered
Apr 17 23:57:41.352087 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:57:41.352095 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:57:41.352105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:57:41.352113 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:57:41.352121 systemd[1]: Stopped verity-setup.service.
Apr 17 23:57:41.352129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:57:41.352148 systemd-journald[1132]: Collecting audit messages is disabled.
Apr 17 23:57:41.352166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:57:41.352174 systemd-journald[1132]: Journal started
Apr 17 23:57:41.352193 systemd-journald[1132]: Runtime Journal (/run/log/journal/45462eb1e2f04b6d90ec7ee7cf3edb64) is 6.0M, max 48.3M, 42.2M free.
Apr 17 23:57:41.041871 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:57:41.060818 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:57:41.061173 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:57:41.355969 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:57:41.356403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:57:41.358950 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:57:41.360981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:57:41.363044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:57:41.365178 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:57:41.367112 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:57:41.369598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:57:41.372070 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:57:41.372242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:57:41.374951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:57:41.375106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:57:41.377347 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:57:41.377531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:57:41.379651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:57:41.379852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:57:41.382273 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:57:41.382475 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:57:41.384747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:57:41.384928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:57:41.387212 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:57:41.389431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:57:41.391926 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:57:41.394268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:57:41.404040 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:57:41.413774 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:57:41.416631 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:57:41.418685 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:57:41.418721 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:57:41.421344 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:57:41.424655 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:57:41.427689 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:57:41.429776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:57:41.431024 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:57:41.433646 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:57:41.435776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:57:41.436421 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:57:41.437150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:57:41.437853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:57:41.443081 systemd-journald[1132]: Time spent on flushing to /var/log/journal/45462eb1e2f04b6d90ec7ee7cf3edb64 is 16.310ms for 999 entries.
Apr 17 23:57:41.443081 systemd-journald[1132]: System Journal (/var/log/journal/45462eb1e2f04b6d90ec7ee7cf3edb64) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:57:41.479801 systemd-journald[1132]: Received client request to flush runtime journal.
Apr 17 23:57:41.479835 kernel: loop0: detected capacity change from 0 to 217752
Apr 17 23:57:41.445173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:57:41.448245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:57:41.451288 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:57:41.454329 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:57:41.456847 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:57:41.459294 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:57:41.467146 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:57:41.469355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:57:41.472412 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:57:41.480753 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:57:41.483772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:57:41.486136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:57:41.498044 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Apr 17 23:57:41.498055 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Apr 17 23:57:41.499550 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:57:41.502985 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:57:41.513883 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:57:41.516146 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:57:41.516595 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:57:41.527547 kernel: loop1: detected capacity change from 0 to 140768
Apr 17 23:57:41.546861 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:57:41.557186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:57:41.560523 kernel: loop2: detected capacity change from 0 to 142488
Apr 17 23:57:41.569022 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Apr 17 23:57:41.569051 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Apr 17 23:57:41.572093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:57:41.589552 kernel: loop3: detected capacity change from 0 to 217752 Apr 17 23:57:41.598562 kernel: loop4: detected capacity change from 0 to 140768 Apr 17 23:57:41.612532 kernel: loop5: detected capacity change from 0 to 142488 Apr 17 23:57:41.622412 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 23:57:41.622772 (sd-merge)[1201]: Merged extensions into '/usr'. Apr 17 23:57:41.626943 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:57:41.626965 systemd[1]: Reloading... Apr 17 23:57:41.665546 zram_generator::config[1226]: No configuration found. Apr 17 23:57:41.708315 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:57:41.745094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:57:41.775310 systemd[1]: Reloading finished in 148 ms. Apr 17 23:57:41.807231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:57:41.809740 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:57:41.823802 systemd[1]: Starting ensure-sysext.service... Apr 17 23:57:41.828732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:57:41.832719 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:57:41.832727 systemd[1]: Reloading... Apr 17 23:57:41.843046 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:57:41.843260 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 17 23:57:41.843803 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:57:41.843982 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 17 23:57:41.844037 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 17 23:57:41.845816 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:57:41.845833 systemd-tmpfiles[1267]: Skipping /boot Apr 17 23:57:41.851034 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:57:41.851067 systemd-tmpfiles[1267]: Skipping /boot Apr 17 23:57:41.874571 zram_generator::config[1294]: No configuration found. Apr 17 23:57:41.946213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:57:41.975164 systemd[1]: Reloading finished in 142 ms. Apr 17 23:57:41.990255 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:57:42.003094 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:57:42.011625 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:57:42.014878 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:57:42.017988 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:57:42.021445 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:57:42.026733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:57:42.033852 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 17 23:57:42.039012 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.039176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:57:42.040546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:57:42.043934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:57:42.048368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:57:42.050808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:57:42.052595 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:57:42.054658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.055075 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Apr 17 23:57:42.056194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:57:42.056344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:57:42.059180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:57:42.059347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:57:42.063139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:57:42.066304 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:57:42.066635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:57:42.072029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 17 23:57:42.072169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:57:42.080002 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:57:42.083701 augenrules[1363]: No rules Apr 17 23:57:42.084439 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:57:42.088424 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:57:42.091992 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:57:42.094896 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:57:42.097446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:57:42.100164 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:57:42.108433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.108610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:57:42.112710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:57:42.115633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:57:42.118734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:57:42.120692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:57:42.123702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:57:42.125849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 17 23:57:42.125924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.127644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:57:42.127776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:57:42.130206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:57:42.130317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:57:42.141603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.141762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:57:42.147997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:57:42.152060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:57:42.158642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:57:42.161076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:57:42.161179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:57:42.161234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:57:42.161909 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:57:42.162032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:57:42.164782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 17 23:57:42.165001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:57:42.168741 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:57:42.168853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:57:42.171302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:57:42.171441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:57:42.176827 systemd-resolved[1338]: Positive Trust Anchors: Apr 17 23:57:42.177002 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:57:42.177054 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:57:42.177094 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:57:42.177649 systemd[1]: Finished ensure-sysext.service. Apr 17 23:57:42.180428 systemd-resolved[1338]: Defaulting to hostname 'linux'. Apr 17 23:57:42.183541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1376) Apr 17 23:57:42.188762 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:57:42.198814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 23:57:42.201336 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:57:42.201423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:57:42.207548 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 23:57:42.213734 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 17 23:57:42.217533 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:57:42.218632 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:57:42.222630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:57:42.223796 systemd-networkd[1396]: lo: Link UP Apr 17 23:57:42.223802 systemd-networkd[1396]: lo: Gained carrier Apr 17 23:57:42.225246 systemd-networkd[1396]: Enumeration completed Apr 17 23:57:42.225342 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:57:42.226685 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:57:42.226727 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:57:42.227384 systemd-networkd[1396]: eth0: Link UP Apr 17 23:57:42.227388 systemd-networkd[1396]: eth0: Gained carrier Apr 17 23:57:42.227398 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:57:42.228012 systemd[1]: Reached target network.target - Network. Apr 17 23:57:42.233652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 17 23:57:42.241546 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:57:42.242281 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:57:42.253532 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 17 23:57:42.257413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 17 23:57:42.257882 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 17 23:57:42.257913 systemd-timesyncd[1419]: Initial clock synchronization to Fri 2026-04-17 23:57:42.237653 UTC. Apr 17 23:57:42.258552 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 17 23:57:42.258723 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:57:42.263200 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:57:42.263380 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:57:42.263907 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:57:42.281616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:42.291588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:57:42.292191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:57:42.292519 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:57:42.302872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:57:42.360269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:57:42.421414 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:57:42.434720 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:57:42.443079 lvm[1439]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Apr 17 23:57:42.472056 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:57:42.475133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:57:42.477179 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:57:42.479250 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:57:42.481545 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:57:42.484019 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:57:42.486120 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:57:42.488984 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:57:42.491357 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:57:42.491393 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:57:42.493349 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:57:42.495786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:57:42.499019 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:57:42.510317 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:57:42.513465 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:57:42.516012 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:57:42.517091 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:57:42.520148 systemd[1]: Reached target basic.target - Basic System. 
Apr 17 23:57:42.522005 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:57:42.522042 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:57:42.522365 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:57:42.523040 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:57:42.525798 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:57:42.530603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:57:42.533925 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:57:42.535172 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:57:42.536397 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:57:42.540614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:57:42.543424 jq[1446]: false Apr 17 23:57:42.545613 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:57:42.548744 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:57:42.548781 dbus-daemon[1445]: [system] SELinux support is enabled Apr 17 23:57:42.553573 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:57:42.555782 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:57:42.556046 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:57:42.557724 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 17 23:57:42.558070 extend-filesystems[1447]: Found loop3 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found loop4 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found loop5 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found sr0 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda1 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda2 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda3 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found usr Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda4 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda6 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda7 Apr 17 23:57:42.560628 extend-filesystems[1447]: Found vda9 Apr 17 23:57:42.560628 extend-filesystems[1447]: Checking size of /dev/vda9 Apr 17 23:57:42.594423 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 17 23:57:42.594450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1373) Apr 17 23:57:42.594463 extend-filesystems[1447]: Resized partition /dev/vda9 Apr 17 23:57:42.561973 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:57:42.596936 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:57:42.599251 update_engine[1460]: I20260417 23:57:42.590926 1460 main.cc:92] Flatcar Update Engine starting Apr 17 23:57:42.599251 update_engine[1460]: I20260417 23:57:42.591892 1460 update_check_scheduler.cc:74] Next update check in 9m8s Apr 17 23:57:42.568701 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:57:42.574975 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:57:42.604823 jq[1461]: true Apr 17 23:57:42.578745 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 17 23:57:42.578882 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:57:42.605010 jq[1471]: true Apr 17 23:57:42.579058 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:57:42.579188 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:57:42.583983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:57:42.584089 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:57:42.599284 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:57:42.599302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:57:42.602007 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:57:42.602021 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:57:42.613780 tar[1470]: linux-amd64/LICENSE Apr 17 23:57:42.613928 tar[1470]: linux-amd64/helm Apr 17 23:57:42.614180 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:57:42.620365 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 17 23:57:42.622555 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 17 23:57:42.626293 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:57:42.643876 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:57:42.644109 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:57:42.645091 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 17 23:57:42.645091 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 17 23:57:42.645091 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 17 23:57:42.655778 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Apr 17 23:57:42.645802 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:57:42.645931 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:57:42.648628 systemd-logind[1458]: New seat seat0. Apr 17 23:57:42.657161 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:57:42.670201 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:57:42.670405 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:57:42.671193 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:57:42.675988 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 17 23:57:42.799381 containerd[1484]: time="2026-04-17T23:57:42.799250481Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:57:42.818571 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:57:42.818813 containerd[1484]: time="2026-04-17T23:57:42.817028164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.819454 containerd[1484]: time="2026-04-17T23:57:42.819382198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:57:42.819454 containerd[1484]: time="2026-04-17T23:57:42.819438196Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:57:42.819454 containerd[1484]: time="2026-04-17T23:57:42.819456581Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:57:42.819747 containerd[1484]: time="2026-04-17T23:57:42.819715976Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:57:42.819798 containerd[1484]: time="2026-04-17T23:57:42.819771600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.819871 containerd[1484]: time="2026-04-17T23:57:42.819848569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:57:42.819913 containerd[1484]: time="2026-04-17T23:57:42.819873223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820067 containerd[1484]: time="2026-04-17T23:57:42.820027580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820067 containerd[1484]: time="2026-04-17T23:57:42.820058658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820111 containerd[1484]: time="2026-04-17T23:57:42.820068696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820111 containerd[1484]: time="2026-04-17T23:57:42.820075623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820137 containerd[1484]: time="2026-04-17T23:57:42.820124001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820310 containerd[1484]: time="2026-04-17T23:57:42.820275351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820391 containerd[1484]: time="2026-04-17T23:57:42.820368743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:57:42.820408 containerd[1484]: time="2026-04-17T23:57:42.820392424Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 17 23:57:42.820469 containerd[1484]: time="2026-04-17T23:57:42.820440383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:57:42.820774 containerd[1484]: time="2026-04-17T23:57:42.820532996Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826252645Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826300830Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826315888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826328167Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826338961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826439052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826650306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826742625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826754401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826763286Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826772456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826781546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826790504Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.828524 containerd[1484]: time="2026-04-17T23:57:42.826799702Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826810516Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826819588Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826829535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826837739Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826852863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826864781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826877679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826886950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826896014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826906727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826915260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826926064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826934748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830740 containerd[1484]: time="2026-04-17T23:57:42.826944603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.828619 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.826952658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.826960805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.826970111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.826980348Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.826995845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827005604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827012845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827043543Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827055332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827063619Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827072451Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827079162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827089647Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:57:42.830957 containerd[1484]: time="2026-04-17T23:57:42.827099843Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:57:42.831152 containerd[1484]: time="2026-04-17T23:57:42.827109875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.827308513Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.827351913Z" level=info msg="Connect containerd service" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.827378012Z" level=info msg="using legacy CRI server" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.827383347Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.827444422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:57:42.831166 containerd[1484]: 
time="2026-04-17T23:57:42.827915993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828142202Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828169635Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828176306Z" level=info msg="Start subscribing containerd event" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828247019Z" level=info msg="Start recovering state" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828294064Z" level=info msg="Start event monitor" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828301737Z" level=info msg="Start snapshots syncer" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828308095Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828314192Z" level=info msg="Start streaming server" Apr 17 23:57:42.831166 containerd[1484]: time="2026-04-17T23:57:42.828362835Z" level=info msg="containerd successfully booted in 0.030128s" Apr 17 23:57:42.841590 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:57:42.849742 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:57:42.858301 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:57:42.858626 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:57:42.861801 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:57:42.874710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 17 23:57:42.878090 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:57:42.880605 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:57:42.881404 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:57:43.051387 tar[1470]: linux-amd64/README.md Apr 17 23:57:43.065861 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:57:43.491931 systemd-networkd[1396]: eth0: Gained IPv6LL Apr 17 23:57:43.494414 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:57:43.497345 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:57:43.506838 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 23:57:43.510277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:43.513181 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:57:43.527214 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 23:57:43.527383 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 23:57:43.530027 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:57:43.532704 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:57:44.149327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:57:44.151886 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:57:44.152743 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:57:44.156919 systemd[1]: Startup finished in 1.024s (kernel) + 4.989s (initrd) + 3.478s (userspace) = 9.492s. 
Apr 17 23:57:44.508946 kubelet[1558]: E0417 23:57:44.508770 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:57:44.511123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:57:44.511250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:57:48.581854 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:57:48.582848 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:39856.service - OpenSSH per-connection server daemon (10.0.0.1:39856). Apr 17 23:57:48.621223 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39856 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:48.622903 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:48.630525 systemd-logind[1458]: New session 1 of user core. Apr 17 23:57:48.631347 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:57:48.638842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:57:48.648389 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:57:48.650528 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:57:48.656358 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:57:48.734181 systemd[1575]: Queued start job for default target default.target. Apr 17 23:57:48.745613 systemd[1575]: Created slice app.slice - User Application Slice. Apr 17 23:57:48.745668 systemd[1575]: Reached target paths.target - Paths. Apr 17 23:57:48.745680 systemd[1575]: Reached target timers.target - Timers. 
Apr 17 23:57:48.747198 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:57:48.758554 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:57:48.758657 systemd[1575]: Reached target sockets.target - Sockets. Apr 17 23:57:48.758668 systemd[1575]: Reached target basic.target - Basic System. Apr 17 23:57:48.758693 systemd[1575]: Reached target default.target - Main User Target. Apr 17 23:57:48.758714 systemd[1575]: Startup finished in 97ms. Apr 17 23:57:48.758908 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:57:48.760004 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:57:48.820831 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:39858.service - OpenSSH per-connection server daemon (10.0.0.1:39858). Apr 17 23:57:48.851153 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 39858 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:48.852200 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:48.856085 systemd-logind[1458]: New session 2 of user core. Apr 17 23:57:48.875865 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:57:48.928936 sshd[1586]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:48.946624 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:39858.service: Deactivated successfully. Apr 17 23:57:48.947870 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:57:48.948928 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:57:48.950139 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:39860.service - OpenSSH per-connection server daemon (10.0.0.1:39860). Apr 17 23:57:48.951091 systemd-logind[1458]: Removed session 2. 
Apr 17 23:57:48.978287 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 39860 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:48.979312 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:48.982890 systemd-logind[1458]: New session 3 of user core. Apr 17 23:57:48.997747 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:57:49.048377 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:49.056306 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:39860.service: Deactivated successfully. Apr 17 23:57:49.057390 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:57:49.058366 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:57:49.059326 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:39870.service - OpenSSH per-connection server daemon (10.0.0.1:39870). Apr 17 23:57:49.059888 systemd-logind[1458]: Removed session 3. Apr 17 23:57:49.088095 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 39870 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:49.089804 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:49.094031 systemd-logind[1458]: New session 4 of user core. Apr 17 23:57:49.102642 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:57:49.158681 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:49.168900 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:39870.service: Deactivated successfully. Apr 17 23:57:49.170644 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:57:49.171940 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:57:49.187968 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:39880.service - OpenSSH per-connection server daemon (10.0.0.1:39880). Apr 17 23:57:49.189019 systemd-logind[1458]: Removed session 4. 
Apr 17 23:57:49.213881 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 39880 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:49.215273 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:49.219355 systemd-logind[1458]: New session 5 of user core. Apr 17 23:57:49.235963 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:57:49.292797 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:57:49.293049 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:57:49.311599 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 17 23:57:49.313557 sshd[1607]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:49.326201 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:39880.service: Deactivated successfully. Apr 17 23:57:49.327650 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:57:49.328694 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:57:49.339958 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:39890.service - OpenSSH per-connection server daemon (10.0.0.1:39890). Apr 17 23:57:49.340875 systemd-logind[1458]: Removed session 5. Apr 17 23:57:49.365233 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 39890 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:49.367038 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:49.371368 systemd-logind[1458]: New session 6 of user core. Apr 17 23:57:49.381715 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:57:49.434297 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:57:49.434571 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:57:49.438119 sudo[1619]: pam_unix(sudo:session): session closed for user root Apr 17 23:57:49.442594 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:57:49.442801 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:57:49.460971 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:57:49.462904 auditctl[1622]: No rules Apr 17 23:57:49.463249 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:57:49.463566 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:57:49.466292 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:57:49.495872 augenrules[1640]: No rules Apr 17 23:57:49.497153 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:57:49.498286 sudo[1618]: pam_unix(sudo:session): session closed for user root Apr 17 23:57:49.500223 sshd[1615]: pam_unix(sshd:session): session closed for user core Apr 17 23:57:49.512456 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:39890.service: Deactivated successfully. Apr 17 23:57:49.513844 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:57:49.515107 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:57:49.516161 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:34112.service - OpenSSH per-connection server daemon (10.0.0.1:34112). Apr 17 23:57:49.517302 systemd-logind[1458]: Removed session 6. 
Apr 17 23:57:49.544830 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 34112 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk Apr 17 23:57:49.545951 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:57:49.550712 systemd-logind[1458]: New session 7 of user core. Apr 17 23:57:49.567722 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:57:49.618920 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:57:49.619135 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:57:49.868783 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:57:49.868938 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:57:50.108314 dockerd[1669]: time="2026-04-17T23:57:50.108222146Z" level=info msg="Starting up" Apr 17 23:57:50.228030 dockerd[1669]: time="2026-04-17T23:57:50.227845355Z" level=info msg="Loading containers: start." Apr 17 23:57:50.337540 kernel: Initializing XFRM netlink socket Apr 17 23:57:50.406199 systemd-networkd[1396]: docker0: Link UP Apr 17 23:57:50.435393 dockerd[1669]: time="2026-04-17T23:57:50.435302227Z" level=info msg="Loading containers: done." Apr 17 23:57:50.449119 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck797478480-merged.mount: Deactivated successfully. 
Apr 17 23:57:50.450270 dockerd[1669]: time="2026-04-17T23:57:50.450212529Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:57:50.450391 dockerd[1669]: time="2026-04-17T23:57:50.450344962Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:57:50.450515 dockerd[1669]: time="2026-04-17T23:57:50.450453583Z" level=info msg="Daemon has completed initialization" Apr 17 23:57:50.490573 dockerd[1669]: time="2026-04-17T23:57:50.490404515Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:57:50.490865 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:57:50.860137 containerd[1484]: time="2026-04-17T23:57:50.860066810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 17 23:57:51.291846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457018096.mount: Deactivated successfully. 
Apr 17 23:57:51.992888 containerd[1484]: time="2026-04-17T23:57:51.992782921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.994315 containerd[1484]: time="2026-04-17T23:57:51.994234075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 17 23:57:51.995133 containerd[1484]: time="2026-04-17T23:57:51.995069735Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.997697 containerd[1484]: time="2026-04-17T23:57:51.997647217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:51.999516 containerd[1484]: time="2026-04-17T23:57:51.999437658Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.139327665s" Apr 17 23:57:51.999516 containerd[1484]: time="2026-04-17T23:57:51.999507245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 17 23:57:52.000208 containerd[1484]: time="2026-04-17T23:57:52.000170114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 17 23:57:52.727936 containerd[1484]: time="2026-04-17T23:57:52.727855042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:52.728706 containerd[1484]: time="2026-04-17T23:57:52.728660009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 17 23:57:52.729787 containerd[1484]: time="2026-04-17T23:57:52.729746845Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:52.734041 containerd[1484]: time="2026-04-17T23:57:52.734000825Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 733.781186ms" Apr 17 23:57:52.734041 containerd[1484]: time="2026-04-17T23:57:52.734043000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 17 23:57:52.735291 containerd[1484]: time="2026-04-17T23:57:52.735229599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:52.735689 containerd[1484]: time="2026-04-17T23:57:52.735644793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 17 23:57:53.724524 containerd[1484]: time="2026-04-17T23:57:53.724412810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:53.725533 containerd[1484]: time="2026-04-17T23:57:53.725451673Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 17 23:57:53.726902 containerd[1484]: time="2026-04-17T23:57:53.726836822Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:53.730202 containerd[1484]: time="2026-04-17T23:57:53.730133634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:57:53.731026 containerd[1484]: time="2026-04-17T23:57:53.730962538Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 995.279638ms" Apr 17 23:57:53.731026 containerd[1484]: time="2026-04-17T23:57:53.730999702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 17 23:57:53.731631 containerd[1484]: time="2026-04-17T23:57:53.731603813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 17 23:57:54.461533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794003910.mount: Deactivated successfully. Apr 17 23:57:54.519477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:57:54.526760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:57:54.629434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:57:54.632650 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:57:54.674811 kubelet[1898]: E0417 23:57:54.674657 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:57:54.678690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:57:54.678802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:57:54.720639 containerd[1484]: time="2026-04-17T23:57:54.720406678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:54.721359 containerd[1484]: time="2026-04-17T23:57:54.721315396Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 17 23:57:54.722150 containerd[1484]: time="2026-04-17T23:57:54.722102439Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:54.724185 containerd[1484]: time="2026-04-17T23:57:54.724088196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:54.724610 containerd[1484]: time="2026-04-17T23:57:54.724565034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 992.924965ms"
Apr 17 23:57:54.724828 containerd[1484]: time="2026-04-17T23:57:54.724608855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 17 23:57:54.725342 containerd[1484]: time="2026-04-17T23:57:54.725269216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 17 23:57:55.135138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155659005.mount: Deactivated successfully.
Apr 17 23:57:55.809086 containerd[1484]: time="2026-04-17T23:57:55.809006175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:55.810141 containerd[1484]: time="2026-04-17T23:57:55.810077103Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 17 23:57:55.811181 containerd[1484]: time="2026-04-17T23:57:55.811139545Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:55.813672 containerd[1484]: time="2026-04-17T23:57:55.813616280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:55.814650 containerd[1484]: time="2026-04-17T23:57:55.814579757Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.089289365s"
Apr 17 23:57:55.814650 containerd[1484]: time="2026-04-17T23:57:55.814621349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 17 23:57:55.815309 containerd[1484]: time="2026-04-17T23:57:55.815206322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:57:56.189005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162869179.mount: Deactivated successfully.
Apr 17 23:57:56.196971 containerd[1484]: time="2026-04-17T23:57:56.196875601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:56.197700 containerd[1484]: time="2026-04-17T23:57:56.197644077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 17 23:57:56.198897 containerd[1484]: time="2026-04-17T23:57:56.198828977Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:56.200833 containerd[1484]: time="2026-04-17T23:57:56.200764664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:56.201651 containerd[1484]: time="2026-04-17T23:57:56.201588921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 386.302458ms"
Apr 17 23:57:56.201651 containerd[1484]: time="2026-04-17T23:57:56.201634465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:57:56.202098 containerd[1484]: time="2026-04-17T23:57:56.202068076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 17 23:57:56.661903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193838753.mount: Deactivated successfully.
Apr 17 23:57:57.204452 containerd[1484]: time="2026-04-17T23:57:57.204343279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:57.205570 containerd[1484]: time="2026-04-17T23:57:57.205520307Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 17 23:57:57.207198 containerd[1484]: time="2026-04-17T23:57:57.207155200Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:57.210219 containerd[1484]: time="2026-04-17T23:57:57.209784729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:57:57.210784 containerd[1484]: time="2026-04-17T23:57:57.210749580Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.008646076s"
Apr 17 23:57:57.210834 containerd[1484]: time="2026-04-17T23:57:57.210789787Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 17 23:57:58.431858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:57:58.438780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:57:58.460053 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-7.scope)...
Apr 17 23:57:58.460079 systemd[1]: Reloading...
Apr 17 23:57:58.500676 zram_generator::config[2093]: No configuration found.
Apr 17 23:57:58.609070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:57:58.656262 systemd[1]: Reloading finished in 195 ms.
Apr 17 23:57:58.699274 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:57:58.699341 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:57:58.699669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:57:58.701445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:57:58.813600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:57:58.817403 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:57:58.858208 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
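The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is the expected state before node bootstrap: kubeadm has not yet written the kubelet's config file, so the unit crash-loops until it exists. For orientation only, a hedged sketch of what such a file can look like once generated; the two substantive values shown (systemd cgroup driver, static pod path /etc/kubernetes/manifests) are both confirmed later in this log, everything else about the real file may differ:

```yaml
# Illustrative /var/lib/kubelet/config.yaml fragment (not taken from this host)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # matches "CgroupDriver":"systemd" logged below
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
```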
Apr 17 23:57:59.065670 kubelet[2146]: I0417 23:57:59.065572 2146 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 17 23:57:59.065670 kubelet[2146]: I0417 23:57:59.065624 2146 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:57:59.065670 kubelet[2146]: I0417 23:57:59.065637 2146 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:57:59.065670 kubelet[2146]: I0417 23:57:59.065642 2146 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:57:59.065890 kubelet[2146]: I0417 23:57:59.065872 2146 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 23:57:59.146714 kubelet[2146]: E0417 23:57:59.146617 2146 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:57:59.146982 kubelet[2146]: I0417 23:57:59.146951 2146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:57:59.151577 kubelet[2146]: E0417 23:57:59.151546 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:57:59.151674 kubelet[2146]: I0417 23:57:59.151581 2146 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:57:59.155016 kubelet[2146]: I0417 23:57:59.154979 2146 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:57:59.156138 kubelet[2146]: I0417 23:57:59.156094 2146 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:57:59.156270 kubelet[2146]: I0417 23:57:59.156133 2146 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:57:59.156270 kubelet[2146]: I0417 23:57:59.156269 2146 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 23:57:59.156421 kubelet[2146]: I0417 23:57:59.156276 2146 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 23:57:59.156421 kubelet[2146]: I0417 23:57:59.156345 2146 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:57:59.157774 kubelet[2146]: I0417 23:57:59.157754 2146 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 23:57:59.158009 kubelet[2146]: I0417 23:57:59.157979 2146 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 23:57:59.158009 kubelet[2146]: I0417 23:57:59.158003 2146 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:57:59.158046 kubelet[2146]: I0417 23:57:59.158021 2146 kubelet.go:394] "Adding apiserver pod source"
Apr 17 23:57:59.158046 kubelet[2146]: I0417 23:57:59.158028 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:57:59.160108 kubelet[2146]: I0417 23:57:59.160068 2146 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:57:59.162796 kubelet[2146]: I0417 23:57:59.162749 2146 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:57:59.162796 kubelet[2146]: I0417 23:57:59.162788 2146 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:57:59.162844 kubelet[2146]: W0417 23:57:59.162831 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
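The `HardEvictionThresholds` array dumped in the NodeConfig line above is dense; it encodes two kinds of thresholds, absolute quantities (memory.available < 100Mi) and capacity percentages (e.g. nodefs.available < 10%). A small illustrative re-statement of that table, assuming the values exactly as logged; this is not kubelet source code, just the same data made executable:

```python
# Hard-eviction thresholds as dumped in the NodeConfig log line:
# ("quantity", bytes) for absolute values, ("percentage", fraction) otherwise.
HARD_EVICTION = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def below_threshold(signal, observed, capacity=None):
    """True when the observed value has fallen below the hard-eviction line.

    `observed` and `capacity` are in the signal's native unit (bytes or inodes);
    percentage thresholds are evaluated against `capacity`.
    """
    kind, threshold = HARD_EVICTION[signal]
    if kind == "quantity":
        return observed < threshold
    return observed < threshold * capacity
```

For example, a node with an 8Gi nodefs and only 700Mi free is under the 10% line, while 200Mi of available memory is comfortably above the 100Mi memory threshold.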
Apr 17 23:57:59.165110 kubelet[2146]: I0417 23:57:59.165069 2146 server.go:1257] "Started kubelet"
Apr 17 23:57:59.165408 kubelet[2146]: I0417 23:57:59.165242 2146 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:57:59.165408 kubelet[2146]: I0417 23:57:59.165309 2146 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:57:59.166191 kubelet[2146]: I0417 23:57:59.165320 2146 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:57:59.166431 kubelet[2146]: I0417 23:57:59.166392 2146 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:57:59.166942 kubelet[2146]: I0417 23:57:59.166802 2146 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 17 23:57:59.167551 kubelet[2146]: I0417 23:57:59.167526 2146 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:57:59.170314 kubelet[2146]: I0417 23:57:59.170277 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:57:59.171551 kubelet[2146]: I0417 23:57:59.170773 2146 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 17 23:57:59.171551 kubelet[2146]: E0417 23:57:59.170962 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:57:59.171551 kubelet[2146]: I0417 23:57:59.171076 2146 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:57:59.171551 kubelet[2146]: I0417 23:57:59.171128 2146 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:57:59.171551 kubelet[2146]: I0417 23:57:59.171447 2146 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:57:59.171651 kubelet[2146]: I0417 23:57:59.171601 2146 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:57:59.171823 kubelet[2146]: E0417 23:57:59.171606 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms"
Apr 17 23:57:59.172894 kubelet[2146]: E0417 23:57:59.172831 2146 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:57:59.173366 kubelet[2146]: I0417 23:57:59.173342 2146 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:57:59.174448 kubelet[2146]: E0417 23:57:59.171431 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a74a4e87abb5cc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:57:59.165031884 +0000 UTC m=+0.343944210,LastTimestamp:2026-04-17 23:57:59.165031884 +0000 UTC m=+0.343944210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 23:57:59.177996 kubelet[2146]: I0417 23:57:59.177949 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:57:59.185657 kubelet[2146]: I0417 23:57:59.185478 2146 cpu_manager.go:225] "Starting" policy="none"
Apr 17 23:57:59.185657 kubelet[2146]: I0417 23:57:59.185652 2146 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 17 23:57:59.185946 kubelet[2146]: I0417 23:57:59.185667 2146 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 17 23:57:59.187873 kubelet[2146]: I0417 23:57:59.187833 2146 policy_none.go:50] "Start"
Apr 17 23:57:59.187873 kubelet[2146]: I0417 23:57:59.187873 2146 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:57:59.188000 kubelet[2146]: I0417 23:57:59.187891 2146 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:57:59.189754 kubelet[2146]: I0417 23:57:59.189728 2146 policy_none.go:44] "Start"
Apr 17 23:57:59.193951 kubelet[2146]: I0417 23:57:59.193919 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:57:59.194014 kubelet[2146]: I0417 23:57:59.193974 2146 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 17 23:57:59.194014 kubelet[2146]: I0417 23:57:59.193994 2146 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 17 23:57:59.194046 kubelet[2146]: E0417 23:57:59.194035 2146 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:57:59.195282 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:57:59.203143 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:57:59.205698 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
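The lease controller's "will retry" interval in this log doubles on each consecutive failure: 200ms here, then 400ms and 800ms further down. A minimal sketch of that kind of doubling backoff, with a hypothetical cap (the kubelet's actual cap and jitter behavior are not visible in this log):

```python
def backoff_intervals(base_ms=200, factor=2, cap_ms=7000, n=6):
    """Yield successive retry intervals for a capped doubling backoff.

    base_ms/factor match the 200ms -> 400ms -> 800ms progression seen in the
    log; cap_ms is an assumed illustrative ceiling, not a kubelet constant.
    """
    interval = base_ms
    for _ in range(n):
        yield min(interval, cap_ms)
        interval *= factor
```

With the defaults, the first three intervals reproduce the sequence logged for the `kube-node-lease` retries.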
Apr 17 23:57:59.218207 kubelet[2146]: E0417 23:57:59.218163 2146 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:57:59.218474 kubelet[2146]: I0417 23:57:59.218348 2146 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 17 23:57:59.218474 kubelet[2146]: I0417 23:57:59.218374 2146 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:57:59.219059 kubelet[2146]: I0417 23:57:59.218911 2146 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 17 23:57:59.219334 kubelet[2146]: E0417 23:57:59.219280 2146 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:57:59.219334 kubelet[2146]: E0417 23:57:59.219329 2146 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 23:57:59.304039 systemd[1]: Created slice kubepods-burstable-podab2ce3fab6e10ee95be120ce329cd613.slice - libcontainer container kubepods-burstable-podab2ce3fab6e10ee95be120ce329cd613.slice.
Apr 17 23:57:59.315388 kubelet[2146]: E0417 23:57:59.315335 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:57:59.318651 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 17 23:57:59.320603 kubelet[2146]: E0417 23:57:59.320577 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:57:59.320942 kubelet[2146]: I0417 23:57:59.320908 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 23:57:59.322085 kubelet[2146]: E0417 23:57:59.321198 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost"
Apr 17 23:57:59.322003 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 17 23:57:59.323390 kubelet[2146]: E0417 23:57:59.323367 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:57:59.372725 kubelet[2146]: E0417 23:57:59.372641 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms"
Apr 17 23:57:59.473531 kubelet[2146]: I0417 23:57:59.473345 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:57:59.473531 kubelet[2146]: I0417 23:57:59.473450 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:57:59.473531 kubelet[2146]: I0417 23:57:59.473466 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:57:59.473531 kubelet[2146]: I0417 23:57:59.473530 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:57:59.473531 kubelet[2146]: I0417 23:57:59.473569 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:57:59.473808 kubelet[2146]: I0417 23:57:59.473584 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:57:59.473808 kubelet[2146]: I0417 23:57:59.473599 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:57:59.473808 kubelet[2146]: I0417 23:57:59.473615 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:57:59.473808 kubelet[2146]: I0417 23:57:59.473629 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 17 23:57:59.523672 kubelet[2146]: I0417 23:57:59.523581 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 23:57:59.524009 kubelet[2146]: E0417 23:57:59.523969 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost"
Apr 17 23:57:59.619467 kubelet[2146]: E0417 23:57:59.619264 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:57:59.620192 containerd[1484]: time="2026-04-17T23:57:59.620130847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab2ce3fab6e10ee95be120ce329cd613,Namespace:kube-system,Attempt:0,}"
Apr 17 23:57:59.625416 kubelet[2146]: E0417 23:57:59.625347 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:57:59.626149 containerd[1484]: time="2026-04-17T23:57:59.625971146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}"
Apr 17 23:57:59.627587 kubelet[2146]: E0417 23:57:59.627475 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:57:59.628001 containerd[1484]: time="2026-04-17T23:57:59.627946576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}"
Apr 17 23:57:59.773530 kubelet[2146]: E0417 23:57:59.773324 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms"
Apr 17 23:57:59.925957 kubelet[2146]: I0417 23:57:59.925813 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 17 23:57:59.926219 kubelet[2146]: E0417 23:57:59.926090 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost"
Apr 17 23:57:59.966782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838680462.mount: Deactivated successfully.
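The repeated "Nameserver limits exceeded" warnings mean the host's resolv.conf listed more nameservers than the resolver limit (three), so the kubelet dropped the extras and applied only the three shown. A hedged illustration of that truncation; this is a hypothetical helper, not the kubelet's dns.go code:

```python
# Illustrative re-statement of the 3-nameserver cap behind the dns.go warning.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text):
    """Return (nameservers actually applied, whether any were omitted)."""
    servers = [parts[1]
               for line in resolv_conf_text.splitlines()
               if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS
```

Given a resolv.conf with four `nameserver` lines, only the first three survive and the "omitted" flag is set, matching the applied line `1.1.1.1 1.0.0.1 8.8.8.8` in the log.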
Apr 17 23:57:59.972540 containerd[1484]: time="2026-04-17T23:57:59.972461915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:57:59.973247 containerd[1484]: time="2026-04-17T23:57:59.973137448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 17 23:57:59.976822 containerd[1484]: time="2026-04-17T23:57:59.976732670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:57:59.977773 containerd[1484]: time="2026-04-17T23:57:59.977744482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:57:59.978581 containerd[1484]: time="2026-04-17T23:57:59.978534887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:57:59.979305 containerd[1484]: time="2026-04-17T23:57:59.979213977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:57:59.979925 containerd[1484]: time="2026-04-17T23:57:59.979841787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:57:59.980599 containerd[1484]: time="2026-04-17T23:57:59.980561037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:57:59.981368 containerd[1484]: time="2026-04-17T23:57:59.981260677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 361.039698ms"
Apr 17 23:57:59.984007 containerd[1484]: time="2026-04-17T23:57:59.983959574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 357.920238ms"
Apr 17 23:57:59.986730 containerd[1484]: time="2026-04-17T23:57:59.986682938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 358.66903ms"
Apr 17 23:58:00.071623 containerd[1484]: time="2026-04-17T23:58:00.070962434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:58:00.071623 containerd[1484]: time="2026-04-17T23:58:00.071081438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:58:00.071623 containerd[1484]: time="2026-04-17T23:58:00.071275848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.072623 containerd[1484]: time="2026-04-17T23:58:00.072360802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:58:00.072623 containerd[1484]: time="2026-04-17T23:58:00.072390341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:58:00.072623 containerd[1484]: time="2026-04-17T23:58:00.072409158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.072623 containerd[1484]: time="2026-04-17T23:58:00.072462754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.072623 containerd[1484]: time="2026-04-17T23:58:00.072237226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.074882 containerd[1484]: time="2026-04-17T23:58:00.074794187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:58:00.074928 containerd[1484]: time="2026-04-17T23:58:00.074839411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:58:00.074928 containerd[1484]: time="2026-04-17T23:58:00.074850514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.074928 containerd[1484]: time="2026-04-17T23:58:00.074901814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:00.099716 systemd[1]: Started cri-containerd-04e3fb1ab859b00b7ceebe6add4933b47c65006778b86ddfe32b3e6ce8123b25.scope - libcontainer container 04e3fb1ab859b00b7ceebe6add4933b47c65006778b86ddfe32b3e6ce8123b25.
Apr 17 23:58:00.100898 systemd[1]: Started cri-containerd-1ba956a58d08329b343843ebd2e6364bac33744d4f3c1c43b6ea7e08e7166d74.scope - libcontainer container 1ba956a58d08329b343843ebd2e6364bac33744d4f3c1c43b6ea7e08e7166d74. Apr 17 23:58:00.102147 systemd[1]: Started cri-containerd-96d1f2ebfbffdb4c7771241bec2cc2ecb34bf0e1772f6712c82190e057e32714.scope - libcontainer container 96d1f2ebfbffdb4c7771241bec2cc2ecb34bf0e1772f6712c82190e057e32714. Apr 17 23:58:00.141638 containerd[1484]: time="2026-04-17T23:58:00.141578524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"04e3fb1ab859b00b7ceebe6add4933b47c65006778b86ddfe32b3e6ce8123b25\"" Apr 17 23:58:00.143936 containerd[1484]: time="2026-04-17T23:58:00.143878515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d1f2ebfbffdb4c7771241bec2cc2ecb34bf0e1772f6712c82190e057e32714\"" Apr 17 23:58:00.144581 kubelet[2146]: E0417 23:58:00.144567 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:00.144691 containerd[1484]: time="2026-04-17T23:58:00.144647819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab2ce3fab6e10ee95be120ce329cd613,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ba956a58d08329b343843ebd2e6364bac33744d4f3c1c43b6ea7e08e7166d74\"" Apr 17 23:58:00.145691 kubelet[2146]: E0417 23:58:00.145465 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:00.145691 kubelet[2146]: E0417 23:58:00.145602 2146 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:00.152178 containerd[1484]: time="2026-04-17T23:58:00.152145309Z" level=info msg="CreateContainer within sandbox \"1ba956a58d08329b343843ebd2e6364bac33744d4f3c1c43b6ea7e08e7166d74\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:58:00.153475 containerd[1484]: time="2026-04-17T23:58:00.153451906Z" level=info msg="CreateContainer within sandbox \"04e3fb1ab859b00b7ceebe6add4933b47c65006778b86ddfe32b3e6ce8123b25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:58:00.155630 containerd[1484]: time="2026-04-17T23:58:00.155588141Z" level=info msg="CreateContainer within sandbox \"96d1f2ebfbffdb4c7771241bec2cc2ecb34bf0e1772f6712c82190e057e32714\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:58:00.174883 containerd[1484]: time="2026-04-17T23:58:00.174809050Z" level=info msg="CreateContainer within sandbox \"1ba956a58d08329b343843ebd2e6364bac33744d4f3c1c43b6ea7e08e7166d74\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f91cd324e332d4e2d5fe58c6cef43a8211c6b09085bd69d24a5af4ee41fd0802\"" Apr 17 23:58:00.175514 containerd[1484]: time="2026-04-17T23:58:00.175403005Z" level=info msg="StartContainer for \"f91cd324e332d4e2d5fe58c6cef43a8211c6b09085bd69d24a5af4ee41fd0802\"" Apr 17 23:58:00.177374 containerd[1484]: time="2026-04-17T23:58:00.176119894Z" level=info msg="CreateContainer within sandbox \"96d1f2ebfbffdb4c7771241bec2cc2ecb34bf0e1772f6712c82190e057e32714\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3777aabab5afac3db5fe491108b807072befa43823efde0b71074c3d412e811\"" Apr 17 23:58:00.177374 containerd[1484]: time="2026-04-17T23:58:00.176464342Z" level=info msg="StartContainer for \"e3777aabab5afac3db5fe491108b807072befa43823efde0b71074c3d412e811\"" Apr 17 
23:58:00.177569 containerd[1484]: time="2026-04-17T23:58:00.177515952Z" level=info msg="CreateContainer within sandbox \"04e3fb1ab859b00b7ceebe6add4933b47c65006778b86ddfe32b3e6ce8123b25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"190754ee5b3b2ef8fa7c6ac5941aaf0094b30c5143631823abde4acc7f332d66\"" Apr 17 23:58:00.177988 containerd[1484]: time="2026-04-17T23:58:00.177874453Z" level=info msg="StartContainer for \"190754ee5b3b2ef8fa7c6ac5941aaf0094b30c5143631823abde4acc7f332d66\"" Apr 17 23:58:00.202092 systemd[1]: Started cri-containerd-f91cd324e332d4e2d5fe58c6cef43a8211c6b09085bd69d24a5af4ee41fd0802.scope - libcontainer container f91cd324e332d4e2d5fe58c6cef43a8211c6b09085bd69d24a5af4ee41fd0802. Apr 17 23:58:00.205022 systemd[1]: Started cri-containerd-190754ee5b3b2ef8fa7c6ac5941aaf0094b30c5143631823abde4acc7f332d66.scope - libcontainer container 190754ee5b3b2ef8fa7c6ac5941aaf0094b30c5143631823abde4acc7f332d66. Apr 17 23:58:00.205773 systemd[1]: Started cri-containerd-e3777aabab5afac3db5fe491108b807072befa43823efde0b71074c3d412e811.scope - libcontainer container e3777aabab5afac3db5fe491108b807072befa43823efde0b71074c3d412e811. 
Apr 17 23:58:00.246095 containerd[1484]: time="2026-04-17T23:58:00.246026256Z" level=info msg="StartContainer for \"f91cd324e332d4e2d5fe58c6cef43a8211c6b09085bd69d24a5af4ee41fd0802\" returns successfully" Apr 17 23:58:00.246702 containerd[1484]: time="2026-04-17T23:58:00.246686471Z" level=info msg="StartContainer for \"190754ee5b3b2ef8fa7c6ac5941aaf0094b30c5143631823abde4acc7f332d66\" returns successfully" Apr 17 23:58:00.253945 containerd[1484]: time="2026-04-17T23:58:00.253901845Z" level=info msg="StartContainer for \"e3777aabab5afac3db5fe491108b807072befa43823efde0b71074c3d412e811\" returns successfully" Apr 17 23:58:00.728046 kubelet[2146]: I0417 23:58:00.727946 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:58:01.178892 kubelet[2146]: E0417 23:58:01.178805 2146 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 23:58:01.210397 kubelet[2146]: E0417 23:58:01.210330 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:58:01.210553 kubelet[2146]: E0417 23:58:01.210460 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:01.211803 kubelet[2146]: E0417 23:58:01.211777 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:58:01.211928 kubelet[2146]: E0417 23:58:01.211918 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:01.212900 kubelet[2146]: E0417 23:58:01.212874 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:58:01.213044 kubelet[2146]: E0417 23:58:01.212972 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:01.372559 kubelet[2146]: I0417 23:58:01.372455 2146 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:58:01.472712 kubelet[2146]: I0417 23:58:01.472462 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:01.478121 kubelet[2146]: E0417 23:58:01.478090 2146 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:01.478121 kubelet[2146]: I0417 23:58:01.478124 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:01.479596 kubelet[2146]: E0417 23:58:01.479556 2146 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:01.479596 kubelet[2146]: I0417 23:58:01.479584 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:01.480822 kubelet[2146]: E0417 23:58:01.480775 2146 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:02.159765 kubelet[2146]: I0417 23:58:02.159690 2146 apiserver.go:52] "Watching apiserver" Apr 17 23:58:02.171673 kubelet[2146]: I0417 23:58:02.171601 2146 desired_state_of_world_populator.go:154] "Finished 
populating initial desired state of world" Apr 17 23:58:02.214216 kubelet[2146]: I0417 23:58:02.214143 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:02.214607 kubelet[2146]: I0417 23:58:02.214294 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:02.214607 kubelet[2146]: I0417 23:58:02.214327 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:02.220654 kubelet[2146]: E0417 23:58:02.220606 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:02.220757 kubelet[2146]: E0417 23:58:02.220615 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:02.221094 kubelet[2146]: E0417 23:58:02.221051 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:03.216142 kubelet[2146]: E0417 23:58:03.216026 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:03.216468 kubelet[2146]: I0417 23:58:03.216197 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.216468 kubelet[2146]: I0417 23:58:03.216240 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:03.226249 kubelet[2146]: E0417 23:58:03.226024 2146 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:03.226249 kubelet[2146]: E0417 23:58:03.226177 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:03.226416 kubelet[2146]: E0417 23:58:03.226378 2146 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.226599 kubelet[2146]: E0417 23:58:03.226555 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:03.263745 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)... Apr 17 23:58:03.263778 systemd[1]: Reloading... Apr 17 23:58:03.309580 zram_generator::config[2473]: No configuration found. Apr 17 23:58:03.385757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:58:03.447764 systemd[1]: Reloading finished in 183 ms. Apr 17 23:58:03.488404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:58:03.510609 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:58:03.510825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:58:03.520994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:58:03.620219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:58:03.624069 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:58:03.656905 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:58:03.664147 kubelet[2518]: I0417 23:58:03.662824 2518 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:58:03.664147 kubelet[2518]: I0417 23:58:03.662884 2518 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:58:03.664147 kubelet[2518]: I0417 23:58:03.662896 2518 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:58:03.664147 kubelet[2518]: I0417 23:58:03.662900 2518 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:58:03.664147 kubelet[2518]: I0417 23:58:03.663568 2518 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:58:03.665259 kubelet[2518]: I0417 23:58:03.665213 2518 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:58:03.668443 kubelet[2518]: I0417 23:58:03.668391 2518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:58:03.672158 kubelet[2518]: E0417 23:58:03.672097 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:58:03.672158 kubelet[2518]: I0417 23:58:03.672172 2518 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 17 23:58:03.675242 kubelet[2518]: I0417 23:58:03.675222 2518 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 17 23:58:03.675423 kubelet[2518]: I0417 23:58:03.675371 2518 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:58:03.675569 kubelet[2518]: I0417 23:58:03.675405 2518 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 
23:58:03.675569 kubelet[2518]: I0417 23:58:03.675564 2518 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 23:58:03.675569 kubelet[2518]: I0417 23:58:03.675570 2518 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:58:03.675680 kubelet[2518]: I0417 23:58:03.675585 2518 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:58:03.675766 kubelet[2518]: I0417 23:58:03.675742 2518 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:58:03.675944 kubelet[2518]: I0417 23:58:03.675896 2518 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:58:03.675944 kubelet[2518]: I0417 23:58:03.675938 2518 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:58:03.675981 kubelet[2518]: I0417 23:58:03.675953 2518 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:58:03.675981 kubelet[2518]: I0417 23:58:03.675960 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:58:03.680137 kubelet[2518]: I0417 23:58:03.680095 2518 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:58:03.681144 kubelet[2518]: I0417 23:58:03.681108 2518 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:58:03.681144 kubelet[2518]: I0417 23:58:03.681142 2518 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:58:03.684022 kubelet[2518]: I0417 23:58:03.683952 2518 server.go:1257] "Started kubelet" Apr 17 23:58:03.686968 kubelet[2518]: I0417 23:58:03.686834 2518 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:58:03.687268 kubelet[2518]: I0417 23:58:03.687240 2518 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:58:03.688172 kubelet[2518]: I0417 23:58:03.687888 2518 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:58:03.688172 kubelet[2518]: I0417 23:58:03.687997 2518 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:58:03.688172 kubelet[2518]: I0417 23:58:03.688114 2518 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:58:03.688525 kubelet[2518]: I0417 23:58:03.688476 2518 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:58:03.688628 kubelet[2518]: I0417 23:58:03.688581 2518 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:58:03.688713 kubelet[2518]: I0417 23:58:03.688666 2518 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:58:03.689663 kubelet[2518]: I0417 23:58:03.689637 2518 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:58:03.690474 kubelet[2518]: I0417 23:58:03.690384 2518 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:58:03.690729 kubelet[2518]: I0417 23:58:03.690694 2518 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:58:03.690883 kubelet[2518]: I0417 23:58:03.690873 2518 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:58:03.691586 kubelet[2518]: I0417 23:58:03.691008 2518 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:58:03.695107 kubelet[2518]: E0417 23:58:03.695085 2518 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:58:03.701366 kubelet[2518]: I0417 23:58:03.701297 2518 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:58:03.702427 kubelet[2518]: I0417 23:58:03.702397 2518 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:58:03.702427 kubelet[2518]: I0417 23:58:03.702427 2518 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:58:03.702534 kubelet[2518]: I0417 23:58:03.702451 2518 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:58:03.702550 kubelet[2518]: E0417 23:58:03.702530 2518 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:58:03.718141 kubelet[2518]: I0417 23:58:03.718124 2518 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:58:03.718258 kubelet[2518]: I0417 23:58:03.718223 2518 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:58:03.718313 kubelet[2518]: I0417 23:58:03.718307 2518 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:58:03.718425 kubelet[2518]: I0417 23:58:03.718417 2518 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:58:03.718472 kubelet[2518]: I0417 23:58:03.718459 2518 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:58:03.718546 kubelet[2518]: I0417 23:58:03.718542 2518 policy_none.go:50] "Start" Apr 17 23:58:03.718581 kubelet[2518]: I0417 23:58:03.718578 2518 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:58:03.718608 kubelet[2518]: I0417 23:58:03.718604 2518 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 
23:58:03.718709 kubelet[2518]: I0417 23:58:03.718704 2518 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:58:03.718735 kubelet[2518]: I0417 23:58:03.718732 2518 policy_none.go:44] "Start" Apr 17 23:58:03.722883 kubelet[2518]: E0417 23:58:03.722847 2518 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:58:03.722999 kubelet[2518]: I0417 23:58:03.722991 2518 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:58:03.723027 kubelet[2518]: I0417 23:58:03.723005 2518 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:58:03.723255 kubelet[2518]: I0417 23:58:03.723234 2518 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:58:03.725526 kubelet[2518]: E0417 23:58:03.725460 2518 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:58:03.803759 kubelet[2518]: I0417 23:58:03.803668 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.803759 kubelet[2518]: I0417 23:58:03.803739 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.803989 kubelet[2518]: I0417 23:58:03.803740 2518 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:03.811190 kubelet[2518]: E0417 23:58:03.811098 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.811190 kubelet[2518]: E0417 23:58:03.811183 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.811410 kubelet[2518]: E0417 23:58:03.811234 2518 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:03.830775 kubelet[2518]: I0417 23:58:03.830714 2518 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:58:03.837271 kubelet[2518]: I0417 23:58:03.837225 2518 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 17 23:58:03.837331 kubelet[2518]: I0417 23:58:03.837288 2518 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:58:03.888961 kubelet[2518]: I0417 23:58:03.888886 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " 
pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.888961 kubelet[2518]: I0417 23:58:03.888928 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.888961 kubelet[2518]: I0417 23:58:03.888952 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.888961 kubelet[2518]: I0417 23:58:03.888967 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.888961 kubelet[2518]: I0417 23:58:03.888981 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.889219 kubelet[2518]: I0417 23:58:03.888996 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.889219 kubelet[2518]: I0417 23:58:03.889010 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2ce3fab6e10ee95be120ce329cd613-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab2ce3fab6e10ee95be120ce329cd613\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:58:03.889219 kubelet[2518]: I0417 23:58:03.889023 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:58:03.889219 kubelet[2518]: I0417 23:58:03.889036 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:58:04.112205 kubelet[2518]: E0417 23:58:04.112022 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.112205 kubelet[2518]: E0417 23:58:04.112038 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.112205 kubelet[2518]: E0417 23:58:04.112051 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.261861 
sudo[2561]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 23:58:04.262085 sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 17 23:58:04.677768 kubelet[2518]: I0417 23:58:04.677712 2518 apiserver.go:52] "Watching apiserver" Apr 17 23:58:04.689110 kubelet[2518]: I0417 23:58:04.689066 2518 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:58:04.710074 sudo[2561]: pam_unix(sudo:session): session closed for user root Apr 17 23:58:04.712469 kubelet[2518]: E0417 23:58:04.712401 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.712469 kubelet[2518]: E0417 23:58:04.712471 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.712661 kubelet[2518]: E0417 23:58:04.712646 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:04.736699 kubelet[2518]: I0417 23:58:04.736627 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7366145619999998 podStartE2EDuration="2.736614562s" podCreationTimestamp="2026-04-17 23:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:04.728937904 +0000 UTC m=+1.101268012" watchObservedRunningTime="2026-04-17 23:58:04.736614562 +0000 UTC m=+1.108944667" Apr 17 23:58:04.743225 kubelet[2518]: I0417 23:58:04.743142 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.743128122 podStartE2EDuration="2.743128122s" podCreationTimestamp="2026-04-17 23:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:04.736763838 +0000 UTC m=+1.109093936" watchObservedRunningTime="2026-04-17 23:58:04.743128122 +0000 UTC m=+1.115458219" Apr 17 23:58:04.833023 kubelet[2518]: I0417 23:58:04.832942 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.832929691 podStartE2EDuration="2.832929691s" podCreationTimestamp="2026-04-17 23:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:04.743321331 +0000 UTC m=+1.115651440" watchObservedRunningTime="2026-04-17 23:58:04.832929691 +0000 UTC m=+1.205259789" Apr 17 23:58:05.714002 kubelet[2518]: E0417 23:58:05.713880 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:05.714002 kubelet[2518]: E0417 23:58:05.713922 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:05.856672 sudo[1651]: pam_unix(sudo:session): session closed for user root Apr 17 23:58:05.857884 sshd[1648]: pam_unix(sshd:session): session closed for user core Apr 17 23:58:05.860436 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:34112.service: Deactivated successfully. Apr 17 23:58:05.861836 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:58:05.861983 systemd[1]: session-7.scope: Consumed 3.089s CPU time, 158.2M memory peak, 0B memory swap peak. 
Apr 17 23:58:05.862405 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:58:05.864187 systemd-logind[1458]: Removed session 7. Apr 17 23:58:06.717159 kubelet[2518]: E0417 23:58:06.716059 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:06.717159 kubelet[2518]: E0417 23:58:06.716382 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:07.718299 kubelet[2518]: E0417 23:58:07.718236 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:09.693054 kubelet[2518]: E0417 23:58:09.692714 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:10.308251 kubelet[2518]: I0417 23:58:10.308209 2518 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:58:10.308714 containerd[1484]: time="2026-04-17T23:58:10.308677787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:58:10.308972 kubelet[2518]: I0417 23:58:10.308888 2518 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:58:11.383122 systemd[1]: Created slice kubepods-besteffort-podefae55df_4ca3_4f2e_a856_5980809e2c7e.slice - libcontainer container kubepods-besteffort-podefae55df_4ca3_4f2e_a856_5980809e2c7e.slice. 
Apr 17 23:58:11.397258 systemd[1]: Created slice kubepods-burstable-pod36bd0be9_f134_4fb0_80d6_1445a0562501.slice - libcontainer container kubepods-burstable-pod36bd0be9_f134_4fb0_80d6_1445a0562501.slice. Apr 17 23:58:11.529675 kubelet[2518]: I0417 23:58:11.529588 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efae55df-4ca3-4f2e-a856-5980809e2c7e-lib-modules\") pod \"kube-proxy-5v5r5\" (UID: \"efae55df-4ca3-4f2e-a856-5980809e2c7e\") " pod="kube-system/kube-proxy-5v5r5" Apr 17 23:58:11.529675 kubelet[2518]: I0417 23:58:11.529639 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-bpf-maps\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.529675 kubelet[2518]: I0417 23:58:11.529667 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-cgroup\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.529675 kubelet[2518]: I0417 23:58:11.529685 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cni-path\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.529675 kubelet[2518]: I0417 23:58:11.529702 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-hubble-tls\") pod \"cilium-w7vjk\" (UID: 
\"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529725 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjscp\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-kube-api-access-cjscp\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529790 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efae55df-4ca3-4f2e-a856-5980809e2c7e-kube-proxy\") pod \"kube-proxy-5v5r5\" (UID: \"efae55df-4ca3-4f2e-a856-5980809e2c7e\") " pod="kube-system/kube-proxy-5v5r5" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529813 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-run\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529874 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-hostproc\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529899 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-etc-cni-netd\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530215 kubelet[2518]: I0417 23:58:11.529918 
2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-lib-modules\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530341 kubelet[2518]: I0417 23:58:11.529946 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-xtables-lock\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530341 kubelet[2518]: I0417 23:58:11.529968 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bd0be9-f134-4fb0-80d6-1445a0562501-clustermesh-secrets\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530341 kubelet[2518]: I0417 23:58:11.529985 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-net\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530341 kubelet[2518]: I0417 23:58:11.530004 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-kernel\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.530341 kubelet[2518]: I0417 23:58:11.530041 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efae55df-4ca3-4f2e-a856-5980809e2c7e-xtables-lock\") pod \"kube-proxy-5v5r5\" (UID: \"efae55df-4ca3-4f2e-a856-5980809e2c7e\") " pod="kube-system/kube-proxy-5v5r5" Apr 17 23:58:11.530434 kubelet[2518]: I0417 23:58:11.530075 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdrs\" (UniqueName: \"kubernetes.io/projected/efae55df-4ca3-4f2e-a856-5980809e2c7e-kube-api-access-qkdrs\") pod \"kube-proxy-5v5r5\" (UID: \"efae55df-4ca3-4f2e-a856-5980809e2c7e\") " pod="kube-system/kube-proxy-5v5r5" Apr 17 23:58:11.530434 kubelet[2518]: I0417 23:58:11.530091 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-config-path\") pod \"cilium-w7vjk\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") " pod="kube-system/cilium-w7vjk" Apr 17 23:58:11.576579 systemd[1]: Created slice kubepods-besteffort-pod45b72dfc_29c1_45d3_9925_c10688b0cc83.slice - libcontainer container kubepods-besteffort-pod45b72dfc_29c1_45d3_9925_c10688b0cc83.slice. 
Apr 17 23:58:11.696343 kubelet[2518]: E0417 23:58:11.696183 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:11.696909 containerd[1484]: time="2026-04-17T23:58:11.696861046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v5r5,Uid:efae55df-4ca3-4f2e-a856-5980809e2c7e,Namespace:kube-system,Attempt:0,}" Apr 17 23:58:11.703337 kubelet[2518]: E0417 23:58:11.703298 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:11.704077 containerd[1484]: time="2026-04-17T23:58:11.703710324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7vjk,Uid:36bd0be9-f134-4fb0-80d6-1445a0562501,Namespace:kube-system,Attempt:0,}" Apr 17 23:58:11.716982 containerd[1484]: time="2026-04-17T23:58:11.716895556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:11.717057 containerd[1484]: time="2026-04-17T23:58:11.717005790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:11.717057 containerd[1484]: time="2026-04-17T23:58:11.717019577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.717313 containerd[1484]: time="2026-04-17T23:58:11.717236082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.726782 containerd[1484]: time="2026-04-17T23:58:11.726551174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:11.726782 containerd[1484]: time="2026-04-17T23:58:11.726614895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:11.726782 containerd[1484]: time="2026-04-17T23:58:11.726628715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.726782 containerd[1484]: time="2026-04-17T23:58:11.726696120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.730995 kubelet[2518]: I0417 23:58:11.730970 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45b72dfc-29c1-45d3-9925-c10688b0cc83-cilium-config-path\") pod \"cilium-operator-78cf5644cb-qct8j\" (UID: \"45b72dfc-29c1-45d3-9925-c10688b0cc83\") " pod="kube-system/cilium-operator-78cf5644cb-qct8j" Apr 17 23:58:11.731218 kubelet[2518]: I0417 23:58:11.731165 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28h4\" (UniqueName: \"kubernetes.io/projected/45b72dfc-29c1-45d3-9925-c10688b0cc83-kube-api-access-b28h4\") pod \"cilium-operator-78cf5644cb-qct8j\" (UID: \"45b72dfc-29c1-45d3-9925-c10688b0cc83\") " pod="kube-system/cilium-operator-78cf5644cb-qct8j" Apr 17 23:58:11.734772 systemd[1]: Started cri-containerd-42a02017d098a735bc9e28dbd117fb8a1d08667d8ccf8ff1584955ef1258ef20.scope - libcontainer container 42a02017d098a735bc9e28dbd117fb8a1d08667d8ccf8ff1584955ef1258ef20. Apr 17 23:58:11.738162 systemd[1]: Started cri-containerd-3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e.scope - libcontainer container 3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e. 
Apr 17 23:58:11.755634 containerd[1484]: time="2026-04-17T23:58:11.755571436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5v5r5,Uid:efae55df-4ca3-4f2e-a856-5980809e2c7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"42a02017d098a735bc9e28dbd117fb8a1d08667d8ccf8ff1584955ef1258ef20\"" Apr 17 23:58:11.755720 containerd[1484]: time="2026-04-17T23:58:11.755649950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7vjk,Uid:36bd0be9-f134-4fb0-80d6-1445a0562501,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\"" Apr 17 23:58:11.756404 kubelet[2518]: E0417 23:58:11.756382 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:11.756843 kubelet[2518]: E0417 23:58:11.756784 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:11.758958 containerd[1484]: time="2026-04-17T23:58:11.758785775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:58:11.761817 containerd[1484]: time="2026-04-17T23:58:11.761716562Z" level=info msg="CreateContainer within sandbox \"42a02017d098a735bc9e28dbd117fb8a1d08667d8ccf8ff1584955ef1258ef20\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:58:11.776710 containerd[1484]: time="2026-04-17T23:58:11.776641487Z" level=info msg="CreateContainer within sandbox \"42a02017d098a735bc9e28dbd117fb8a1d08667d8ccf8ff1584955ef1258ef20\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0d2a4499c23631f71c7c548dabb1827611c53b59163e1f3a05a4c9a3c6d4324\"" Apr 17 23:58:11.777307 containerd[1484]: time="2026-04-17T23:58:11.777290910Z" 
level=info msg="StartContainer for \"c0d2a4499c23631f71c7c548dabb1827611c53b59163e1f3a05a4c9a3c6d4324\"" Apr 17 23:58:11.806769 systemd[1]: Started cri-containerd-c0d2a4499c23631f71c7c548dabb1827611c53b59163e1f3a05a4c9a3c6d4324.scope - libcontainer container c0d2a4499c23631f71c7c548dabb1827611c53b59163e1f3a05a4c9a3c6d4324. Apr 17 23:58:11.829536 containerd[1484]: time="2026-04-17T23:58:11.829465171Z" level=info msg="StartContainer for \"c0d2a4499c23631f71c7c548dabb1827611c53b59163e1f3a05a4c9a3c6d4324\" returns successfully" Apr 17 23:58:11.881899 kubelet[2518]: E0417 23:58:11.881834 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:11.882730 containerd[1484]: time="2026-04-17T23:58:11.882691805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-qct8j,Uid:45b72dfc-29c1-45d3-9925-c10688b0cc83,Namespace:kube-system,Attempt:0,}" Apr 17 23:58:11.920957 containerd[1484]: time="2026-04-17T23:58:11.920662385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:58:11.920957 containerd[1484]: time="2026-04-17T23:58:11.920762959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:58:11.920957 containerd[1484]: time="2026-04-17T23:58:11.920788375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.920957 containerd[1484]: time="2026-04-17T23:58:11.920853928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:58:11.938652 systemd[1]: Started cri-containerd-0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db.scope - libcontainer container 0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db. Apr 17 23:58:11.971015 containerd[1484]: time="2026-04-17T23:58:11.970874585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-qct8j,Uid:45b72dfc-29c1-45d3-9925-c10688b0cc83,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\"" Apr 17 23:58:11.971682 kubelet[2518]: E0417 23:58:11.971649 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:12.729104 kubelet[2518]: E0417 23:58:12.729057 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:12.741639 kubelet[2518]: I0417 23:58:12.741431 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-5v5r5" podStartSLOduration=1.741407596 podStartE2EDuration="1.741407596s" podCreationTimestamp="2026-04-17 23:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:12.741332013 +0000 UTC m=+9.113662110" watchObservedRunningTime="2026-04-17 23:58:12.741407596 +0000 UTC m=+9.113737704" Apr 17 23:58:13.125914 kubelet[2518]: E0417 23:58:13.125844 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:15.793412 kubelet[2518]: E0417 23:58:15.793342 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:17.873473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035879916.mount: Deactivated successfully. Apr 17 23:58:19.114370 containerd[1484]: time="2026-04-17T23:58:19.114284037Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:19.114972 containerd[1484]: time="2026-04-17T23:58:19.114943260Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 17 23:58:19.118008 containerd[1484]: time="2026-04-17T23:58:19.117944876Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:19.119225 containerd[1484]: time="2026-04-17T23:58:19.119186965Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.360189011s" Apr 17 23:58:19.119272 containerd[1484]: time="2026-04-17T23:58:19.119226001Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 17 23:58:19.120300 containerd[1484]: time="2026-04-17T23:58:19.120251465Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:58:19.125982 containerd[1484]: time="2026-04-17T23:58:19.125945362Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:58:19.139270 containerd[1484]: time="2026-04-17T23:58:19.139220827Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\"" Apr 17 23:58:19.139773 containerd[1484]: time="2026-04-17T23:58:19.139735799Z" level=info msg="StartContainer for \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\"" Apr 17 23:58:19.164756 systemd[1]: Started cri-containerd-538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548.scope - libcontainer container 538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548. Apr 17 23:58:19.202347 systemd[1]: cri-containerd-538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548.scope: Deactivated successfully. 
Apr 17 23:58:19.213103 containerd[1484]: time="2026-04-17T23:58:19.213039059Z" level=info msg="StartContainer for \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\" returns successfully" Apr 17 23:58:19.283040 containerd[1484]: time="2026-04-17T23:58:19.282960274Z" level=info msg="shim disconnected" id=538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548 namespace=k8s.io Apr 17 23:58:19.283316 containerd[1484]: time="2026-04-17T23:58:19.283265214Z" level=warning msg="cleaning up after shim disconnected" id=538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548 namespace=k8s.io Apr 17 23:58:19.283316 containerd[1484]: time="2026-04-17T23:58:19.283299029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:58:19.698660 kubelet[2518]: E0417 23:58:19.698614 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:19.743388 kubelet[2518]: E0417 23:58:19.743164 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:19.747630 containerd[1484]: time="2026-04-17T23:58:19.747467925Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:58:19.771202 containerd[1484]: time="2026-04-17T23:58:19.771130714Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\"" Apr 17 23:58:19.772006 containerd[1484]: time="2026-04-17T23:58:19.771970944Z" level=info msg="StartContainer for 
\"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\"" Apr 17 23:58:19.800948 systemd[1]: Started cri-containerd-b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519.scope - libcontainer container b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519. Apr 17 23:58:19.819038 containerd[1484]: time="2026-04-17T23:58:19.818956943Z" level=info msg="StartContainer for \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\" returns successfully" Apr 17 23:58:19.829198 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:58:19.829355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:58:19.829404 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:58:19.835331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:58:19.835593 systemd[1]: cri-containerd-b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519.scope: Deactivated successfully. Apr 17 23:58:19.869246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:58:19.870195 containerd[1484]: time="2026-04-17T23:58:19.870124146Z" level=info msg="shim disconnected" id=b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519 namespace=k8s.io Apr 17 23:58:19.870195 containerd[1484]: time="2026-04-17T23:58:19.870171210Z" level=warning msg="cleaning up after shim disconnected" id=b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519 namespace=k8s.io Apr 17 23:58:19.870195 containerd[1484]: time="2026-04-17T23:58:19.870180686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:58:20.135564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548-rootfs.mount: Deactivated successfully. Apr 17 23:58:20.496388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983798225.mount: Deactivated successfully. 
Apr 17 23:58:20.749022 kubelet[2518]: E0417 23:58:20.748808 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:20.752943 containerd[1484]: time="2026-04-17T23:58:20.752783946Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:58:20.768761 containerd[1484]: time="2026-04-17T23:58:20.768552459Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:20.768875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601769564.mount: Deactivated successfully. Apr 17 23:58:20.769140 containerd[1484]: time="2026-04-17T23:58:20.769102727Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 17 23:58:20.773315 containerd[1484]: time="2026-04-17T23:58:20.773263758Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:58:20.773550 containerd[1484]: time="2026-04-17T23:58:20.773527518Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\"" Apr 17 23:58:20.774999 containerd[1484]: time="2026-04-17T23:58:20.774022087Z" level=info msg="StartContainer for \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\"" Apr 17 23:58:20.774999 containerd[1484]: 
time="2026-04-17T23:58:20.774917704Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.654627383s" Apr 17 23:58:20.774999 containerd[1484]: time="2026-04-17T23:58:20.774944666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 17 23:58:20.780433 containerd[1484]: time="2026-04-17T23:58:20.780373985Z" level=info msg="CreateContainer within sandbox \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:58:20.801753 systemd[1]: Started cri-containerd-8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c.scope - libcontainer container 8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c. Apr 17 23:58:20.833524 containerd[1484]: time="2026-04-17T23:58:20.833422519Z" level=info msg="StartContainer for \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\" returns successfully" Apr 17 23:58:20.835349 systemd[1]: cri-containerd-8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c.scope: Deactivated successfully. 
Apr 17 23:58:20.836335 containerd[1484]: time="2026-04-17T23:58:20.836257954Z" level=info msg="CreateContainer within sandbox \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\"" Apr 17 23:58:20.836849 containerd[1484]: time="2026-04-17T23:58:20.836776232Z" level=info msg="StartContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\"" Apr 17 23:58:20.861675 systemd[1]: Started cri-containerd-0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf.scope - libcontainer container 0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf. Apr 17 23:58:20.867783 containerd[1484]: time="2026-04-17T23:58:20.867712836Z" level=info msg="shim disconnected" id=8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c namespace=k8s.io Apr 17 23:58:20.867783 containerd[1484]: time="2026-04-17T23:58:20.867765586Z" level=warning msg="cleaning up after shim disconnected" id=8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c namespace=k8s.io Apr 17 23:58:20.867783 containerd[1484]: time="2026-04-17T23:58:20.867772954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:58:20.886013 containerd[1484]: time="2026-04-17T23:58:20.885941280Z" level=info msg="StartContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" returns successfully" Apr 17 23:58:21.754016 kubelet[2518]: E0417 23:58:21.753962 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:58:21.759436 kubelet[2518]: E0417 23:58:21.759382 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 
23:58:21.760298 containerd[1484]: time="2026-04-17T23:58:21.760178132Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:58:21.778916 containerd[1484]: time="2026-04-17T23:58:21.778854178Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\"" Apr 17 23:58:21.781041 containerd[1484]: time="2026-04-17T23:58:21.780784841Z" level=info msg="StartContainer for \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\"" Apr 17 23:58:21.842691 systemd[1]: Started cri-containerd-1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec.scope - libcontainer container 1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec. Apr 17 23:58:21.877661 systemd[1]: cri-containerd-1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec.scope: Deactivated successfully. 
Apr 17 23:58:21.879191 containerd[1484]: time="2026-04-17T23:58:21.879130388Z" level=info msg="StartContainer for \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\" returns successfully"
Apr 17 23:58:21.901966 containerd[1484]: time="2026-04-17T23:58:21.901898139Z" level=info msg="shim disconnected" id=1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec namespace=k8s.io
Apr 17 23:58:21.901966 containerd[1484]: time="2026-04-17T23:58:21.901956349Z" level=warning msg="cleaning up after shim disconnected" id=1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec namespace=k8s.io
Apr 17 23:58:21.901966 containerd[1484]: time="2026-04-17T23:58:21.901963955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:58:22.135758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec-rootfs.mount: Deactivated successfully.
Apr 17 23:58:22.760802 kubelet[2518]: E0417 23:58:22.760746 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:22.761181 kubelet[2518]: E0417 23:58:22.760847 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:22.764974 containerd[1484]: time="2026-04-17T23:58:22.764918589Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:58:22.783582 kubelet[2518]: I0417 23:58:22.781696 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-qct8j" podStartSLOduration=2.978630636 podStartE2EDuration="11.781671843s" podCreationTimestamp="2026-04-17 23:58:11 +0000 UTC" firstStartedPulling="2026-04-17 23:58:11.972394202 +0000 UTC m=+8.344724299" lastFinishedPulling="2026-04-17 23:58:20.775435409 +0000 UTC m=+17.147765506" observedRunningTime="2026-04-17 23:58:21.777087289 +0000 UTC m=+18.149417399" watchObservedRunningTime="2026-04-17 23:58:22.781671843 +0000 UTC m=+19.154001943"
Apr 17 23:58:22.785659 containerd[1484]: time="2026-04-17T23:58:22.785572471Z" level=info msg="CreateContainer within sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\""
Apr 17 23:58:22.786306 containerd[1484]: time="2026-04-17T23:58:22.786177876Z" level=info msg="StartContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\""
Apr 17 23:58:22.828705 systemd[1]: Started cri-containerd-8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3.scope - libcontainer container 8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3.
Apr 17 23:58:22.851035 containerd[1484]: time="2026-04-17T23:58:22.850999297Z" level=info msg="StartContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" returns successfully"
Apr 17 23:58:22.973710 kubelet[2518]: I0417 23:58:22.973646 2518 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 17 23:58:23.010012 systemd[1]: Created slice kubepods-burstable-pod95a9fc6f_ab62_4925_868d_df306a167f2c.slice - libcontainer container kubepods-burstable-pod95a9fc6f_ab62_4925_868d_df306a167f2c.slice.
Apr 17 23:58:23.016792 systemd[1]: Created slice kubepods-burstable-podc36c60a6_b71d_4682_90a0_a4d47ae28cae.slice - libcontainer container kubepods-burstable-podc36c60a6_b71d_4682_90a0_a4d47ae28cae.slice.
Apr 17 23:58:23.109145 kubelet[2518]: I0417 23:58:23.108987 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c36c60a6-b71d-4682-90a0-a4d47ae28cae-config-volume\") pod \"coredns-7d764666f9-6p8mq\" (UID: \"c36c60a6-b71d-4682-90a0-a4d47ae28cae\") " pod="kube-system/coredns-7d764666f9-6p8mq"
Apr 17 23:58:23.109145 kubelet[2518]: I0417 23:58:23.109043 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a9fc6f-ab62-4925-868d-df306a167f2c-config-volume\") pod \"coredns-7d764666f9-b47c4\" (UID: \"95a9fc6f-ab62-4925-868d-df306a167f2c\") " pod="kube-system/coredns-7d764666f9-b47c4"
Apr 17 23:58:23.109145 kubelet[2518]: I0417 23:58:23.109058 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbqw\" (UniqueName: \"kubernetes.io/projected/95a9fc6f-ab62-4925-868d-df306a167f2c-kube-api-access-9lbqw\") pod \"coredns-7d764666f9-b47c4\" (UID: \"95a9fc6f-ab62-4925-868d-df306a167f2c\") " pod="kube-system/coredns-7d764666f9-b47c4"
Apr 17 23:58:23.109145 kubelet[2518]: I0417 23:58:23.109084 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47z48\" (UniqueName: \"kubernetes.io/projected/c36c60a6-b71d-4682-90a0-a4d47ae28cae-kube-api-access-47z48\") pod \"coredns-7d764666f9-6p8mq\" (UID: \"c36c60a6-b71d-4682-90a0-a4d47ae28cae\") " pod="kube-system/coredns-7d764666f9-6p8mq"
Apr 17 23:58:23.130256 kubelet[2518]: E0417 23:58:23.130156 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:23.324466 kubelet[2518]: E0417 23:58:23.324357 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:23.328356 kubelet[2518]: E0417 23:58:23.328087 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:23.340652 containerd[1484]: time="2026-04-17T23:58:23.340577098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6p8mq,Uid:c36c60a6-b71d-4682-90a0-a4d47ae28cae,Namespace:kube-system,Attempt:0,}"
Apr 17 23:58:23.341154 containerd[1484]: time="2026-04-17T23:58:23.341091405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b47c4,Uid:95a9fc6f-ab62-4925-868d-df306a167f2c,Namespace:kube-system,Attempt:0,}"
Apr 17 23:58:23.765966 kubelet[2518]: E0417 23:58:23.765782 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:23.781322 kubelet[2518]: I0417 23:58:23.781225 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-w7vjk" podStartSLOduration=1.7782591989999998 podStartE2EDuration="12.781211489s" podCreationTimestamp="2026-04-17 23:58:11 +0000 UTC" firstStartedPulling="2026-04-17 23:58:11.758299231 +0000 UTC m=+8.130629328" lastFinishedPulling="2026-04-17 23:58:22.76125152 +0000 UTC m=+19.133581618" observedRunningTime="2026-04-17 23:58:23.780472726 +0000 UTC m=+20.152802829" watchObservedRunningTime="2026-04-17 23:58:23.781211489 +0000 UTC m=+20.153541587"
Apr 17 23:58:24.734433 systemd-networkd[1396]: cilium_host: Link UP
Apr 17 23:58:24.734572 systemd-networkd[1396]: cilium_net: Link UP
Apr 17 23:58:24.734664 systemd-networkd[1396]: cilium_net: Gained carrier
Apr 17 23:58:24.734748 systemd-networkd[1396]: cilium_host: Gained carrier
Apr 17 23:58:24.768391 kubelet[2518]: E0417 23:58:24.768348 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:24.814009 systemd-networkd[1396]: cilium_vxlan: Link UP
Apr 17 23:58:24.814015 systemd-networkd[1396]: cilium_vxlan: Gained carrier
Apr 17 23:58:24.931763 systemd-networkd[1396]: cilium_net: Gained IPv6LL
Apr 17 23:58:24.956811 systemd-networkd[1396]: cilium_host: Gained IPv6LL
Apr 17 23:58:25.013528 kernel: NET: Registered PF_ALG protocol family
Apr 17 23:58:25.550385 systemd-networkd[1396]: lxc_health: Link UP
Apr 17 23:58:25.558517 systemd-networkd[1396]: lxc_health: Gained carrier
Apr 17 23:58:25.770916 kubelet[2518]: E0417 23:58:25.770860 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:25.950231 systemd-networkd[1396]: lxc86f2b91e7ddf: Link UP
Apr 17 23:58:25.956614 kernel: eth0: renamed from tmp618e7
Apr 17 23:58:25.961537 systemd-networkd[1396]: lxc0b1f98632d0c: Link UP
Apr 17 23:58:25.972085 systemd-networkd[1396]: lxc86f2b91e7ddf: Gained carrier
Apr 17 23:58:25.972706 kernel: eth0: renamed from tmpd7856
Apr 17 23:58:25.980077 systemd-networkd[1396]: lxc0b1f98632d0c: Gained carrier
Apr 17 23:58:26.755762 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL
Apr 17 23:58:26.773159 kubelet[2518]: E0417 23:58:26.773094 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:27.139731 systemd-networkd[1396]: lxc86f2b91e7ddf: Gained IPv6LL
Apr 17 23:58:27.203724 systemd-networkd[1396]: lxc0b1f98632d0c: Gained IPv6LL
Apr 17 23:58:27.395830 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Apr 17 23:58:27.628187 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:42494.service - OpenSSH per-connection server daemon (10.0.0.1:42494).
Apr 17 23:58:27.669671 update_engine[1460]: I20260417 23:58:27.669574 1460 update_attempter.cc:509] Updating boot flags...
Apr 17 23:58:27.684318 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 42494 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:27.685755 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:27.688828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3381)
Apr 17 23:58:27.689921 systemd-logind[1458]: New session 8 of user core.
Apr 17 23:58:27.696016 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:58:27.721562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3381)
Apr 17 23:58:27.778315 kubelet[2518]: E0417 23:58:27.778272 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:27.833730 sshd[3740]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:27.836338 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:42494.service: Deactivated successfully.
Apr 17 23:58:27.837694 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:58:27.838268 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:58:27.839319 systemd-logind[1458]: Removed session 8.
Apr 17 23:58:29.124611 containerd[1484]: time="2026-04-17T23:58:29.124522103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:58:29.124611 containerd[1484]: time="2026-04-17T23:58:29.124576020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:58:29.124611 containerd[1484]: time="2026-04-17T23:58:29.124587081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:29.125014 containerd[1484]: time="2026-04-17T23:58:29.124646142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:29.137949 containerd[1484]: time="2026-04-17T23:58:29.137720955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:58:29.138409 containerd[1484]: time="2026-04-17T23:58:29.138208568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:58:29.138409 containerd[1484]: time="2026-04-17T23:58:29.138241546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:29.138409 containerd[1484]: time="2026-04-17T23:58:29.138343395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:58:29.152725 systemd[1]: Started cri-containerd-618e74218fdbcd5fc086f959954f8af1b30b6a6416b5875adaa474a06a8eb3d3.scope - libcontainer container 618e74218fdbcd5fc086f959954f8af1b30b6a6416b5875adaa474a06a8eb3d3.
Apr 17 23:58:29.157612 systemd[1]: Started cri-containerd-d785680d408fa594296ba4141c5d505c0a5761fb1c6c8679c74b60a45f479703.scope - libcontainer container d785680d408fa594296ba4141c5d505c0a5761fb1c6c8679c74b60a45f479703.
Apr 17 23:58:29.163239 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:58:29.167121 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 17 23:58:29.196547 containerd[1484]: time="2026-04-17T23:58:29.196445589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6p8mq,Uid:c36c60a6-b71d-4682-90a0-a4d47ae28cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"d785680d408fa594296ba4141c5d505c0a5761fb1c6c8679c74b60a45f479703\""
Apr 17 23:58:29.200713 kubelet[2518]: E0417 23:58:29.200550 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:29.201410 containerd[1484]: time="2026-04-17T23:58:29.200838863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-b47c4,Uid:95a9fc6f-ab62-4925-868d-df306a167f2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"618e74218fdbcd5fc086f959954f8af1b30b6a6416b5875adaa474a06a8eb3d3\""
Apr 17 23:58:29.202851 kubelet[2518]: E0417 23:58:29.202642 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:29.207737 containerd[1484]: time="2026-04-17T23:58:29.207689369Z" level=info msg="CreateContainer within sandbox \"d785680d408fa594296ba4141c5d505c0a5761fb1c6c8679c74b60a45f479703\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:58:29.209742 containerd[1484]: time="2026-04-17T23:58:29.209530265Z" level=info msg="CreateContainer within sandbox \"618e74218fdbcd5fc086f959954f8af1b30b6a6416b5875adaa474a06a8eb3d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:58:29.228567 containerd[1484]: time="2026-04-17T23:58:29.228471274Z" level=info msg="CreateContainer within sandbox \"618e74218fdbcd5fc086f959954f8af1b30b6a6416b5875adaa474a06a8eb3d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84615d5124a1408af6949ea25549f6d67ccb5816ed38141dff5ab24d48cd1450\""
Apr 17 23:58:29.229320 containerd[1484]: time="2026-04-17T23:58:29.229268802Z" level=info msg="StartContainer for \"84615d5124a1408af6949ea25549f6d67ccb5816ed38141dff5ab24d48cd1450\""
Apr 17 23:58:29.239374 containerd[1484]: time="2026-04-17T23:58:29.239307736Z" level=info msg="CreateContainer within sandbox \"d785680d408fa594296ba4141c5d505c0a5761fb1c6c8679c74b60a45f479703\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"062e6dd86f0530b3267f0508a81e4529fd69d3c076fd606419380b321ae55f31\""
Apr 17 23:58:29.240089 containerd[1484]: time="2026-04-17T23:58:29.240068714Z" level=info msg="StartContainer for \"062e6dd86f0530b3267f0508a81e4529fd69d3c076fd606419380b321ae55f31\""
Apr 17 23:58:29.253666 systemd[1]: Started cri-containerd-84615d5124a1408af6949ea25549f6d67ccb5816ed38141dff5ab24d48cd1450.scope - libcontainer container 84615d5124a1408af6949ea25549f6d67ccb5816ed38141dff5ab24d48cd1450.
Apr 17 23:58:29.276144 systemd[1]: Started cri-containerd-062e6dd86f0530b3267f0508a81e4529fd69d3c076fd606419380b321ae55f31.scope - libcontainer container 062e6dd86f0530b3267f0508a81e4529fd69d3c076fd606419380b321ae55f31.
Apr 17 23:58:29.288709 containerd[1484]: time="2026-04-17T23:58:29.288678616Z" level=info msg="StartContainer for \"84615d5124a1408af6949ea25549f6d67ccb5816ed38141dff5ab24d48cd1450\" returns successfully"
Apr 17 23:58:29.306419 containerd[1484]: time="2026-04-17T23:58:29.306347247Z" level=info msg="StartContainer for \"062e6dd86f0530b3267f0508a81e4529fd69d3c076fd606419380b321ae55f31\" returns successfully"
Apr 17 23:58:29.785316 kubelet[2518]: E0417 23:58:29.785204 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:29.787413 kubelet[2518]: E0417 23:58:29.787354 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:29.800541 kubelet[2518]: I0417 23:58:29.800407 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-b47c4" podStartSLOduration=18.800388977 podStartE2EDuration="18.800388977s" podCreationTimestamp="2026-04-17 23:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:29.799797015 +0000 UTC m=+26.172127119" watchObservedRunningTime="2026-04-17 23:58:29.800388977 +0000 UTC m=+26.172719095"
Apr 17 23:58:30.789605 kubelet[2518]: E0417 23:58:30.789556 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:30.790111 kubelet[2518]: E0417 23:58:30.789712 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:31.792006 kubelet[2518]: E0417 23:58:31.791948 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:31.792006 kubelet[2518]: E0417 23:58:31.792007 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:58:32.848160 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:40324.service - OpenSSH per-connection server daemon (10.0.0.1:40324).
Apr 17 23:58:32.885730 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 40324 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:32.887551 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:32.892974 systemd-logind[1458]: New session 9 of user core.
Apr 17 23:58:32.903823 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:58:33.021983 sshd[3939]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:33.025151 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:40324.service: Deactivated successfully.
Apr 17 23:58:33.026447 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:58:33.027054 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:58:33.028765 systemd-logind[1458]: Removed session 9.
Apr 17 23:58:38.033218 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332).
Apr 17 23:58:38.066087 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:38.068123 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:38.071791 systemd-logind[1458]: New session 10 of user core.
Apr 17 23:58:38.081666 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:58:38.180719 sshd[3956]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:38.192860 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:40332.service: Deactivated successfully.
Apr 17 23:58:38.194088 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:58:38.195171 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:58:38.203768 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:40344.service - OpenSSH per-connection server daemon (10.0.0.1:40344).
Apr 17 23:58:38.204786 systemd-logind[1458]: Removed session 10.
Apr 17 23:58:38.229869 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 40344 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:38.231052 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:38.235062 systemd-logind[1458]: New session 11 of user core.
Apr 17 23:58:38.241637 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:58:38.376851 sshd[3971]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:38.388478 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:40344.service: Deactivated successfully.
Apr 17 23:58:38.390681 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:58:38.392082 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:58:38.413256 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:40360.service - OpenSSH per-connection server daemon (10.0.0.1:40360).
Apr 17 23:58:38.417391 systemd-logind[1458]: Removed session 11.
Apr 17 23:58:38.475021 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 40360 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:38.476287 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:38.481136 systemd-logind[1458]: New session 12 of user core.
Apr 17 23:58:38.489785 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:58:38.599113 sshd[3983]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:38.602166 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:40360.service: Deactivated successfully.
Apr 17 23:58:38.603553 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:58:38.604203 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:58:38.604991 systemd-logind[1458]: Removed session 12.
Apr 17 23:58:43.613414 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:51312.service - OpenSSH per-connection server daemon (10.0.0.1:51312).
Apr 17 23:58:43.646297 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 51312 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:43.647651 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:43.651664 systemd-logind[1458]: New session 13 of user core.
Apr 17 23:58:43.667882 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:58:43.785586 sshd[3999]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:43.788851 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:51312.service: Deactivated successfully.
Apr 17 23:58:43.790316 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:58:43.791225 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:58:43.792718 systemd-logind[1458]: Removed session 13.
Apr 17 23:58:48.797271 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:51316.service - OpenSSH per-connection server daemon (10.0.0.1:51316).
Apr 17 23:58:48.828195 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 51316 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:48.829477 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:48.834747 systemd-logind[1458]: New session 14 of user core.
Apr 17 23:58:48.849796 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:58:48.949795 kernel: hrtimer: interrupt took 2662713 ns
Apr 17 23:58:48.995906 sshd[4013]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:49.006588 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:51316.service: Deactivated successfully.
Apr 17 23:58:49.007880 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:58:49.009256 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:58:49.010772 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:51326.service - OpenSSH per-connection server daemon (10.0.0.1:51326).
Apr 17 23:58:49.011651 systemd-logind[1458]: Removed session 14.
Apr 17 23:58:49.041919 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:49.043091 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:49.047039 systemd-logind[1458]: New session 15 of user core.
Apr 17 23:58:49.056880 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:58:49.240740 sshd[4027]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:49.256632 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:51326.service: Deactivated successfully.
Apr 17 23:58:49.258386 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:58:49.259997 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:58:49.265760 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:51332.service - OpenSSH per-connection server daemon (10.0.0.1:51332).
Apr 17 23:58:49.266527 systemd-logind[1458]: Removed session 15.
Apr 17 23:58:49.300234 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 51332 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:49.301563 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:49.305367 systemd-logind[1458]: New session 16 of user core.
Apr 17 23:58:49.312902 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:58:49.752873 sshd[4040]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:49.760955 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:51332.service: Deactivated successfully.
Apr 17 23:58:49.763448 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:58:49.766876 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:58:49.776584 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:34786.service - OpenSSH per-connection server daemon (10.0.0.1:34786).
Apr 17 23:58:49.778413 systemd-logind[1458]: Removed session 16.
Apr 17 23:58:49.807446 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 34786 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:49.809224 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:49.813320 systemd-logind[1458]: New session 17 of user core.
Apr 17 23:58:49.822909 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:58:50.054820 sshd[4058]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:50.067628 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:34786.service: Deactivated successfully.
Apr 17 23:58:50.069729 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:58:50.071388 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:58:50.083090 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:34802.service - OpenSSH per-connection server daemon (10.0.0.1:34802).
Apr 17 23:58:50.083997 systemd-logind[1458]: Removed session 17.
Apr 17 23:58:50.110710 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 34802 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:50.112061 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:50.116030 systemd-logind[1458]: New session 18 of user core.
Apr 17 23:58:50.126808 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:58:50.240011 sshd[4072]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:50.243803 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:34802.service: Deactivated successfully.
Apr 17 23:58:50.245149 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:58:50.245812 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:58:50.247209 systemd-logind[1458]: Removed session 18.
Apr 17 23:58:55.251108 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:34808.service - OpenSSH per-connection server daemon (10.0.0.1:34808).
Apr 17 23:58:55.283210 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 34808 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:58:55.284815 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:58:55.288375 systemd-logind[1458]: New session 19 of user core.
Apr 17 23:58:55.297670 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:58:55.396154 sshd[4090]: pam_unix(sshd:session): session closed for user core
Apr 17 23:58:55.398917 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:34808.service: Deactivated successfully.
Apr 17 23:58:55.400313 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:58:55.401023 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:58:55.401816 systemd-logind[1458]: Removed session 19.
Apr 17 23:59:00.407092 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:54078.service - OpenSSH per-connection server daemon (10.0.0.1:54078).
Apr 17 23:59:00.437388 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 54078 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:00.439306 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:00.443091 systemd-logind[1458]: New session 20 of user core.
Apr 17 23:59:00.451070 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:59:00.554677 sshd[4104]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:00.556773 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:54078.service: Deactivated successfully.
Apr 17 23:59:00.558167 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:59:00.559242 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:59:00.560135 systemd-logind[1458]: Removed session 20.
Apr 17 23:59:05.569268 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:54088.service - OpenSSH per-connection server daemon (10.0.0.1:54088).
Apr 17 23:59:05.603900 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 54088 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:05.605441 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:05.611062 systemd-logind[1458]: New session 21 of user core.
Apr 17 23:59:05.625671 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:59:05.736372 sshd[4120]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:05.747209 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:54088.service: Deactivated successfully.
Apr 17 23:59:05.749135 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:59:05.750372 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:59:05.751692 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096).
Apr 17 23:59:05.752851 systemd-logind[1458]: Removed session 21.
Apr 17 23:59:05.780231 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:05.781275 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:05.785229 systemd-logind[1458]: New session 22 of user core.
Apr 17 23:59:05.794782 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:59:07.122223 kubelet[2518]: I0417 23:59:07.119276 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6p8mq" podStartSLOduration=56.119263188 podStartE2EDuration="56.119263188s" podCreationTimestamp="2026-04-17 23:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:58:29.827464014 +0000 UTC m=+26.199794130" watchObservedRunningTime="2026-04-17 23:59:07.119263188 +0000 UTC m=+63.491593297"
Apr 17 23:59:07.126607 containerd[1484]: time="2026-04-17T23:59:07.126412198Z" level=info msg="StopContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" with timeout 30 (s)"
Apr 17 23:59:07.127537 containerd[1484]: time="2026-04-17T23:59:07.127316807Z" level=info msg="Stop container \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" with signal terminated"
Apr 17 23:59:07.147570 systemd[1]: cri-containerd-0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf.scope: Deactivated successfully.
Apr 17 23:59:07.160475 containerd[1484]: time="2026-04-17T23:59:07.160437964Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:59:07.163324 containerd[1484]: time="2026-04-17T23:59:07.163265290Z" level=info msg="StopContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" with timeout 2 (s)"
Apr 17 23:59:07.163727 containerd[1484]: time="2026-04-17T23:59:07.163677308Z" level=info msg="Stop container \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" with signal terminated"
Apr 17 23:59:07.170278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf-rootfs.mount: Deactivated successfully.
Apr 17 23:59:07.170683 systemd-networkd[1396]: lxc_health: Link DOWN
Apr 17 23:59:07.170831 systemd-networkd[1396]: lxc_health: Lost carrier
Apr 17 23:59:07.176728 containerd[1484]: time="2026-04-17T23:59:07.176680536Z" level=info msg="shim disconnected" id=0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf namespace=k8s.io
Apr 17 23:59:07.176860 containerd[1484]: time="2026-04-17T23:59:07.176833312Z" level=warning msg="cleaning up after shim disconnected" id=0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf namespace=k8s.io
Apr 17 23:59:07.176860 containerd[1484]: time="2026-04-17T23:59:07.176859330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:07.187282 systemd[1]: cri-containerd-8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3.scope: Deactivated successfully.
Apr 17 23:59:07.187802 systemd[1]: cri-containerd-8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3.scope: Consumed 5.796s CPU time.
Apr 17 23:59:07.189362 containerd[1484]: time="2026-04-17T23:59:07.189321616Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:59:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:59:07.194244 containerd[1484]: time="2026-04-17T23:59:07.194129195Z" level=info msg="StopContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" returns successfully"
Apr 17 23:59:07.195253 containerd[1484]: time="2026-04-17T23:59:07.195156509Z" level=info msg="StopPodSandbox for \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\""
Apr 17 23:59:07.195383 containerd[1484]: time="2026-04-17T23:59:07.195320494Z" level=info msg="Container to stop \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.197161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db-shm.mount: Deactivated successfully.
Apr 17 23:59:07.202552 systemd[1]: cri-containerd-0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db.scope: Deactivated successfully.
Apr 17 23:59:07.207600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3-rootfs.mount: Deactivated successfully.
Apr 17 23:59:07.213708 containerd[1484]: time="2026-04-17T23:59:07.213600501Z" level=info msg="shim disconnected" id=8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3 namespace=k8s.io
Apr 17 23:59:07.213708 containerd[1484]: time="2026-04-17T23:59:07.213672298Z" level=warning msg="cleaning up after shim disconnected" id=8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3 namespace=k8s.io
Apr 17 23:59:07.213708 containerd[1484]: time="2026-04-17T23:59:07.213679827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:07.227231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db-rootfs.mount: Deactivated successfully.
Apr 17 23:59:07.232700 containerd[1484]: time="2026-04-17T23:59:07.232626836Z" level=info msg="shim disconnected" id=0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db namespace=k8s.io
Apr 17 23:59:07.232977 containerd[1484]: time="2026-04-17T23:59:07.232931046Z" level=warning msg="cleaning up after shim disconnected" id=0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db namespace=k8s.io
Apr 17 23:59:07.232977 containerd[1484]: time="2026-04-17T23:59:07.232947459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:07.233387 containerd[1484]: time="2026-04-17T23:59:07.233353604Z" level=info msg="StopContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" returns successfully"
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234002723Z" level=info msg="StopPodSandbox for \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\""
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234040368Z" level=info msg="Container to stop \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234054755Z" level=info msg="Container to stop \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234066345Z" level=info msg="Container to stop \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234077591Z" level=info msg="Container to stop \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.234213 containerd[1484]: time="2026-04-17T23:59:07.234089835Z" level=info msg="Container to stop \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:59:07.240873 systemd[1]: cri-containerd-3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e.scope: Deactivated successfully.
Apr 17 23:59:07.245172 containerd[1484]: time="2026-04-17T23:59:07.245090725Z" level=info msg="TearDown network for sandbox \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\" successfully"
Apr 17 23:59:07.245172 containerd[1484]: time="2026-04-17T23:59:07.245120595Z" level=info msg="StopPodSandbox for \"0ed1fadf36d74f0277c498d912879643b463a7a3f4cad5bc59396ef6e03620db\" returns successfully"
Apr 17 23:59:07.266406 containerd[1484]: time="2026-04-17T23:59:07.266236203Z" level=info msg="shim disconnected" id=3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e namespace=k8s.io
Apr 17 23:59:07.266406 containerd[1484]: time="2026-04-17T23:59:07.266305358Z" level=warning msg="cleaning up after shim disconnected" id=3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e namespace=k8s.io
Apr 17 23:59:07.266406 containerd[1484]: time="2026-04-17T23:59:07.266320018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:07.277564 containerd[1484]: time="2026-04-17T23:59:07.277523288Z" level=info msg="TearDown network for sandbox \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" successfully"
Apr 17 23:59:07.277564 containerd[1484]: time="2026-04-17T23:59:07.277556877Z" level=info msg="StopPodSandbox for \"3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e\" returns successfully"
Apr 17 23:59:07.403196 kubelet[2518]: I0417 23:59:07.403017 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-lib-modules\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403196 kubelet[2518]: I0417 23:59:07.403074 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-etc-cni-netd\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403196 kubelet[2518]: I0417 23:59:07.403108 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-config-path\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403196 kubelet[2518]: I0417 23:59:07.403135 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/45b72dfc-29c1-45d3-9925-c10688b0cc83-kube-api-access-b28h4\" (UniqueName: \"kubernetes.io/projected/45b72dfc-29c1-45d3-9925-c10688b0cc83-kube-api-access-b28h4\") pod \"45b72dfc-29c1-45d3-9925-c10688b0cc83\" (UID: \"45b72dfc-29c1-45d3-9925-c10688b0cc83\") "
Apr 17 23:59:07.403196 kubelet[2518]: I0417 23:59:07.403142 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-lib-modules" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.403431 kubelet[2518]: I0417 23:59:07.403158 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/45b72dfc-29c1-45d3-9925-c10688b0cc83-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45b72dfc-29c1-45d3-9925-c10688b0cc83-cilium-config-path\") pod \"45b72dfc-29c1-45d3-9925-c10688b0cc83\" (UID: \"45b72dfc-29c1-45d3-9925-c10688b0cc83\") "
Apr 17 23:59:07.403431 kubelet[2518]: I0417 23:59:07.403181 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-kube-api-access-cjscp\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-kube-api-access-cjscp\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403431 kubelet[2518]: I0417 23:59:07.403203 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-hubble-tls\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403431 kubelet[2518]: I0417 23:59:07.403226 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-run\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403431 kubelet[2518]: I0417 23:59:07.403246 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-hostproc\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-hostproc\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403584 kubelet[2518]: I0417 23:59:07.403269 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-kernel\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403584 kubelet[2518]: I0417 23:59:07.403290 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-xtables-lock\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403584 kubelet[2518]: I0417 23:59:07.403315 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-bpf-maps\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403584 kubelet[2518]: I0417 23:59:07.403341 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cni-path\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cni-path\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403584 kubelet[2518]: I0417 23:59:07.403367 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-cgroup\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403674 kubelet[2518]: I0417 23:59:07.403390 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-net\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403674 kubelet[2518]: I0417 23:59:07.403417 2518 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/36bd0be9-f134-4fb0-80d6-1445a0562501-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bd0be9-f134-4fb0-80d6-1445a0562501-clustermesh-secrets\") pod \"36bd0be9-f134-4fb0-80d6-1445a0562501\" (UID: \"36bd0be9-f134-4fb0-80d6-1445a0562501\") "
Apr 17 23:59:07.403674 kubelet[2518]: I0417 23:59:07.403460 2518 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.405094 kubelet[2518]: I0417 23:59:07.404555 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-hostproc" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.405094 kubelet[2518]: I0417 23:59:07.404817 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-config-path" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:59:07.405094 kubelet[2518]: I0417 23:59:07.404847 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-etc-cni-netd" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.406769 kubelet[2518]: I0417 23:59:07.406754 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-kernel" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407118 kubelet[2518]: I0417 23:59:07.406836 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-xtables-lock" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407118 kubelet[2518]: I0417 23:59:07.406851 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-bpf-maps" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407118 kubelet[2518]: I0417 23:59:07.406856 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-run" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407118 kubelet[2518]: I0417 23:59:07.406869 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-cgroup" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407118 kubelet[2518]: I0417 23:59:07.406885 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-net" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.407226 kubelet[2518]: I0417 23:59:07.406861 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cni-path" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:59:07.408707 kubelet[2518]: I0417 23:59:07.408691 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b72dfc-29c1-45d3-9925-c10688b0cc83-cilium-config-path" pod "45b72dfc-29c1-45d3-9925-c10688b0cc83" (UID: "45b72dfc-29c1-45d3-9925-c10688b0cc83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:59:07.409756 kubelet[2518]: I0417 23:59:07.409534 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b72dfc-29c1-45d3-9925-c10688b0cc83-kube-api-access-b28h4" pod "45b72dfc-29c1-45d3-9925-c10688b0cc83" (UID: "45b72dfc-29c1-45d3-9925-c10688b0cc83"). InnerVolumeSpecName "kube-api-access-b28h4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:59:07.409945 kubelet[2518]: I0417 23:59:07.409801 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-hubble-tls" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:59:07.409945 kubelet[2518]: I0417 23:59:07.409820 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bd0be9-f134-4fb0-80d6-1445a0562501-clustermesh-secrets" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:59:07.410239 kubelet[2518]: I0417 23:59:07.410207 2518 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-kube-api-access-cjscp" pod "36bd0be9-f134-4fb0-80d6-1445a0562501" (UID: "36bd0be9-f134-4fb0-80d6-1445a0562501"). InnerVolumeSpecName "kube-api-access-cjscp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503730 2518 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503792 2518 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503806 2518 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503821 2518 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503832 2518 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/36bd0be9-f134-4fb0-80d6-1445a0562501-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503843 2518 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.503828 kubelet[2518]: I0417 23:59:07.503856 2518 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503868 2518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b28h4\" (UniqueName: \"kubernetes.io/projected/45b72dfc-29c1-45d3-9925-c10688b0cc83-kube-api-access-b28h4\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503879 2518 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45b72dfc-29c1-45d3-9925-c10688b0cc83-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503909 2518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cjscp\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-kube-api-access-cjscp\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503958 2518 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/36bd0be9-f134-4fb0-80d6-1445a0562501-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503971 2518 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503981 2518 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.503990 2518 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.504156 kubelet[2518]: I0417 23:59:07.504000 2518 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36bd0be9-f134-4fb0-80d6-1445a0562501-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 17 23:59:07.709597 systemd[1]: Removed slice kubepods-besteffort-pod45b72dfc_29c1_45d3_9925_c10688b0cc83.slice - libcontainer container kubepods-besteffort-pod45b72dfc_29c1_45d3_9925_c10688b0cc83.slice.
Apr 17 23:59:07.711145 systemd[1]: Removed slice kubepods-burstable-pod36bd0be9_f134_4fb0_80d6_1445a0562501.slice - libcontainer container kubepods-burstable-pod36bd0be9_f134_4fb0_80d6_1445a0562501.slice.
Apr 17 23:59:07.711245 systemd[1]: kubepods-burstable-pod36bd0be9_f134_4fb0_80d6_1445a0562501.slice: Consumed 5.862s CPU time.
Apr 17 23:59:07.962699 kubelet[2518]: I0417 23:59:07.962563 2518 scope.go:122] "RemoveContainer" containerID="0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf"
Apr 17 23:59:07.982150 containerd[1484]: time="2026-04-17T23:59:07.982046320Z" level=info msg="RemoveContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\""
Apr 17 23:59:07.988521 containerd[1484]: time="2026-04-17T23:59:07.988408331Z" level=info msg="RemoveContainer for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" returns successfully"
Apr 17 23:59:07.988767 kubelet[2518]: I0417 23:59:07.988715 2518 scope.go:122] "RemoveContainer" containerID="0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf"
Apr 17 23:59:07.999589 containerd[1484]: time="2026-04-17T23:59:07.993263055Z" level=error msg="ContainerStatus for \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\": not found"
Apr 17 23:59:08.011739 kubelet[2518]: E0417 23:59:08.011660 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\": not found" containerID="0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf"
Apr 17 23:59:08.011739 kubelet[2518]: I0417 23:59:08.011716 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf"} err="failed to get container status \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ba632494cef7692239015ce8b4125bd280dcd88e8bbe24701c1e1edc96f04bf\": not found"
Apr 17 23:59:08.011739 kubelet[2518]: I0417 23:59:08.011758 2518 scope.go:122] "RemoveContainer" containerID="8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3"
Apr 17 23:59:08.015871 containerd[1484]: time="2026-04-17T23:59:08.015833020Z" level=info msg="RemoveContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\""
Apr 17 23:59:08.030744 containerd[1484]: time="2026-04-17T23:59:08.030654887Z" level=info msg="RemoveContainer for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" returns successfully"
Apr 17 23:59:08.031197 kubelet[2518]: I0417 23:59:08.031078 2518 scope.go:122] "RemoveContainer" containerID="1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec"
Apr 17 23:59:08.032376 containerd[1484]: time="2026-04-17T23:59:08.032252023Z" level=info msg="RemoveContainer for \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\""
Apr 17 23:59:08.038364 containerd[1484]: time="2026-04-17T23:59:08.038296587Z" level=info msg="RemoveContainer for \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\" returns successfully"
Apr 17 23:59:08.038738 kubelet[2518]: I0417 23:59:08.038717 2518 scope.go:122] "RemoveContainer" containerID="8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c"
Apr 17 23:59:08.039901 containerd[1484]: time="2026-04-17T23:59:08.039867912Z" level=info msg="RemoveContainer for \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\""
Apr 17 23:59:08.043250 containerd[1484]: time="2026-04-17T23:59:08.043194660Z" level=info msg="RemoveContainer for \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\" returns successfully"
Apr 17 23:59:08.043617 kubelet[2518]: I0417 23:59:08.043576 2518 scope.go:122] "RemoveContainer" containerID="b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519"
Apr 17 23:59:08.044705 containerd[1484]: time="2026-04-17T23:59:08.044674590Z" level=info msg="RemoveContainer for \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\""
Apr 17 23:59:08.047301 containerd[1484]: time="2026-04-17T23:59:08.047222784Z" level=info msg="RemoveContainer for \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\" returns successfully"
Apr 17 23:59:08.047558 kubelet[2518]: I0417 23:59:08.047473 2518 scope.go:122] "RemoveContainer" containerID="538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548"
Apr 17 23:59:08.048526 containerd[1484]: time="2026-04-17T23:59:08.048463228Z" level=info msg="RemoveContainer for \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\""
Apr 17 23:59:08.050928 containerd[1484]: time="2026-04-17T23:59:08.050851583Z" level=info msg="RemoveContainer for \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\" returns successfully"
Apr 17 23:59:08.051268 kubelet[2518]: I0417 23:59:08.051234 2518 scope.go:122] "RemoveContainer" containerID="8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3"
Apr 17 23:59:08.051463 containerd[1484]: time="2026-04-17T23:59:08.051433817Z" level=error msg="ContainerStatus for \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\": not found"
Apr 17 23:59:08.051570 kubelet[2518]: E0417 23:59:08.051546 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\": not found" containerID="8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3"
Apr 17 23:59:08.051644 kubelet[2518]: I0417 23:59:08.051583 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3"} err="failed to get container status \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a70d291b5c94e5da629b8c6753d0c52fdfdb3e9912b892556abe82cf53237d3\": not found"
Apr 17 23:59:08.051644 kubelet[2518]: I0417 23:59:08.051602 2518 scope.go:122] "RemoveContainer" containerID="1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec"
Apr 17 23:59:08.051793 containerd[1484]: time="2026-04-17T23:59:08.051759986Z" level=error msg="ContainerStatus for \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\": not found"
Apr 17 23:59:08.051945 kubelet[2518]: E0417 23:59:08.051922 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\": not found" containerID="1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec"
Apr 17 23:59:08.051981 kubelet[2518]: I0417 23:59:08.051951 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec"} err="failed to get container status \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d14236d62435cf1a74fce46e85596fec81a18757b3e11677e0a92f5704eccec\": not found"
Apr 17 23:59:08.051981 kubelet[2518]: I0417 23:59:08.051965 2518 scope.go:122] "RemoveContainer" containerID="8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c"
Apr 17 23:59:08.052139 containerd[1484]: time="2026-04-17T23:59:08.052097642Z" level=error msg="ContainerStatus for \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\": not found"
Apr 17 23:59:08.052233 kubelet[2518]: E0417 23:59:08.052211 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\": not found" containerID="8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c"
Apr 17 23:59:08.052252 kubelet[2518]: I0417 23:59:08.052234 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c"} err="failed to get container status \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8eea183e0ef13f9903346b7518001a57484357e6cc9fca731c8b8d7f0956b69c\": not found"
Apr 17 23:59:08.052252 kubelet[2518]: I0417 23:59:08.052242 2518 scope.go:122] "RemoveContainer" containerID="b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519"
Apr 17 23:59:08.052383 containerd[1484]:
time="2026-04-17T23:59:08.052357261Z" level=error msg="ContainerStatus for \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\": not found" Apr 17 23:59:08.052528 kubelet[2518]: E0417 23:59:08.052457 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\": not found" containerID="b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519" Apr 17 23:59:08.052554 kubelet[2518]: I0417 23:59:08.052529 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519"} err="failed to get container status \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9375871fb962677845cd39ca156948c478aed03d37beefbb311896a71b6b519\": not found" Apr 17 23:59:08.052554 kubelet[2518]: I0417 23:59:08.052537 2518 scope.go:122] "RemoveContainer" containerID="538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548" Apr 17 23:59:08.052693 containerd[1484]: time="2026-04-17T23:59:08.052671939Z" level=error msg="ContainerStatus for \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\": not found" Apr 17 23:59:08.052816 kubelet[2518]: E0417 23:59:08.052794 2518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\": not 
found" containerID="538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548" Apr 17 23:59:08.052840 kubelet[2518]: I0417 23:59:08.052817 2518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548"} err="failed to get container status \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\": rpc error: code = NotFound desc = an error occurred when try to find container \"538f7bf4565af5466400e137bc7d56229c605c01012c6d4716256388bdcb0548\": not found" Apr 17 23:59:08.142120 systemd[1]: var-lib-kubelet-pods-45b72dfc\x2d29c1\x2d45d3\x2d9925\x2dc10688b0cc83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db28h4.mount: Deactivated successfully. Apr 17 23:59:08.142253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e-rootfs.mount: Deactivated successfully. Apr 17 23:59:08.142319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a247e6ac484f292e5948c686b8cda2e127ed78b79f49ce1904b90b12c99737e-shm.mount: Deactivated successfully. Apr 17 23:59:08.142391 systemd[1]: var-lib-kubelet-pods-36bd0be9\x2df134\x2d4fb0\x2d80d6\x2d1445a0562501-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjscp.mount: Deactivated successfully. Apr 17 23:59:08.142457 systemd[1]: var-lib-kubelet-pods-36bd0be9\x2df134\x2d4fb0\x2d80d6\x2d1445a0562501-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 17 23:59:08.142597 systemd[1]: var-lib-kubelet-pods-36bd0be9\x2df134\x2d4fb0\x2d80d6\x2d1445a0562501-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 17 23:59:08.743206 kubelet[2518]: E0417 23:59:08.743125 2518 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:59:09.094423 sshd[4134]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:09.105825 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:54096.service: Deactivated successfully.
Apr 17 23:59:09.108293 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:59:09.110216 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:59:09.125566 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:54104.service - OpenSSH per-connection server daemon (10.0.0.1:54104).
Apr 17 23:59:09.126613 systemd-logind[1458]: Removed session 22.
Apr 17 23:59:09.153244 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 54104 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:09.154712 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:09.161177 systemd-logind[1458]: New session 23 of user core.
Apr 17 23:59:09.170941 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 17 23:59:09.634583 sshd[4294]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:09.644764 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:54104.service: Deactivated successfully.
Apr 17 23:59:09.646784 systemd[1]: session-23.scope: Deactivated successfully.
Apr 17 23:59:09.652061 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Apr 17 23:59:09.669172 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:54176.service - OpenSSH per-connection server daemon (10.0.0.1:54176).
Apr 17 23:59:09.671609 systemd-logind[1458]: Removed session 23.
Apr 17 23:59:09.678796 systemd[1]: Created slice kubepods-burstable-poddfc8db9c_f072_4962_856c_dc3fc634c5fd.slice - libcontainer container kubepods-burstable-poddfc8db9c_f072_4962_856c_dc3fc634c5fd.slice.
Apr 17 23:59:09.698353 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:09.700006 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:09.703748 systemd-logind[1458]: New session 24 of user core.
Apr 17 23:59:09.704866 kubelet[2518]: I0417 23:59:09.704833 2518 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="36bd0be9-f134-4fb0-80d6-1445a0562501" path="/var/lib/kubelet/pods/36bd0be9-f134-4fb0-80d6-1445a0562501/volumes"
Apr 17 23:59:09.705321 kubelet[2518]: I0417 23:59:09.705282 2518 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="45b72dfc-29c1-45d3-9925-c10688b0cc83" path="/var/lib/kubelet/pods/45b72dfc-29c1-45d3-9925-c10688b0cc83/volumes"
Apr 17 23:59:09.711924 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 17 23:59:09.770178 sshd[4307]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:09.785937 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:54176.service: Deactivated successfully.
Apr 17 23:59:09.789420 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:59:09.792384 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:59:09.801745 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:54184.service - OpenSSH per-connection server daemon (10.0.0.1:54184).
Apr 17 23:59:09.805129 systemd-logind[1458]: Removed session 24.
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818120 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-host-proc-sys-net\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818167 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-xtables-lock\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818190 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqhkr\" (UniqueName: \"kubernetes.io/projected/dfc8db9c-f072-4962-856c-dc3fc634c5fd-kube-api-access-nqhkr\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818204 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-hostproc\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818216 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-lib-modules\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818182 kubelet[2518]: I0417 23:59:09.818226 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dfc8db9c-f072-4962-856c-dc3fc634c5fd-cilium-ipsec-secrets\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818237 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-cni-path\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818247 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfc8db9c-f072-4962-856c-dc3fc634c5fd-cilium-config-path\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818262 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-cilium-run\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818383 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-cilium-cgroup\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818438 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfc8db9c-f072-4962-856c-dc3fc634c5fd-clustermesh-secrets\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.818998 kubelet[2518]: I0417 23:59:09.818574 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-etc-cni-netd\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.819180 kubelet[2518]: I0417 23:59:09.818613 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-bpf-maps\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.819180 kubelet[2518]: I0417 23:59:09.818642 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfc8db9c-f072-4962-856c-dc3fc634c5fd-host-proc-sys-kernel\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.819180 kubelet[2518]: I0417 23:59:09.818660 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfc8db9c-f072-4962-856c-dc3fc634c5fd-hubble-tls\") pod \"cilium-4tt8v\" (UID: \"dfc8db9c-f072-4962-856c-dc3fc634c5fd\") " pod="kube-system/cilium-4tt8v"
Apr 17 23:59:09.833794 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 54184 ssh2: RSA SHA256:GLFBd4cyjSUF1rJUSzzLxNbiwtS18VIiv6uKthKOXTk
Apr 17 23:59:09.836170 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:59:09.839884 systemd-logind[1458]: New session 25 of user core.
Apr 17 23:59:09.855421 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:59:09.988032 kubelet[2518]: E0417 23:59:09.987844 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:09.988745 containerd[1484]: time="2026-04-17T23:59:09.988659574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tt8v,Uid:dfc8db9c-f072-4962-856c-dc3fc634c5fd,Namespace:kube-system,Attempt:0,}"
Apr 17 23:59:10.028936 containerd[1484]: time="2026-04-17T23:59:10.028674094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:59:10.028936 containerd[1484]: time="2026-04-17T23:59:10.028813679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:59:10.028936 containerd[1484]: time="2026-04-17T23:59:10.028823654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:59:10.030241 containerd[1484]: time="2026-04-17T23:59:10.028940051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:59:10.058035 systemd[1]: Started cri-containerd-bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95.scope - libcontainer container bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95.
Apr 17 23:59:10.083422 containerd[1484]: time="2026-04-17T23:59:10.083320416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tt8v,Uid:dfc8db9c-f072-4962-856c-dc3fc634c5fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\""
Apr 17 23:59:10.085225 kubelet[2518]: E0417 23:59:10.085116 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:10.094204 containerd[1484]: time="2026-04-17T23:59:10.094015943Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:59:10.115027 containerd[1484]: time="2026-04-17T23:59:10.114723812Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8\""
Apr 17 23:59:10.118378 containerd[1484]: time="2026-04-17T23:59:10.115846180Z" level=info msg="StartContainer for \"9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8\""
Apr 17 23:59:10.153157 systemd[1]: Started cri-containerd-9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8.scope - libcontainer container 9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8.
Apr 17 23:59:10.183538 containerd[1484]: time="2026-04-17T23:59:10.182794495Z" level=info msg="StartContainer for \"9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8\" returns successfully"
Apr 17 23:59:10.192251 systemd[1]: cri-containerd-9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8.scope: Deactivated successfully.
Apr 17 23:59:10.227262 containerd[1484]: time="2026-04-17T23:59:10.227162879Z" level=info msg="shim disconnected" id=9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8 namespace=k8s.io
Apr 17 23:59:10.227262 containerd[1484]: time="2026-04-17T23:59:10.227247938Z" level=warning msg="cleaning up after shim disconnected" id=9e1b2dbf514e6c543418df2c448e010c80f9d80f01a115c9ce0d33175d8eebf8 namespace=k8s.io
Apr 17 23:59:10.227262 containerd[1484]: time="2026-04-17T23:59:10.227259229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:10.985419 kubelet[2518]: E0417 23:59:10.985344 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:10.992191 containerd[1484]: time="2026-04-17T23:59:10.992067753Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:59:11.004802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925556487.mount: Deactivated successfully.
Apr 17 23:59:11.008841 containerd[1484]: time="2026-04-17T23:59:11.008671956Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060\""
Apr 17 23:59:11.009771 containerd[1484]: time="2026-04-17T23:59:11.009630883Z" level=info msg="StartContainer for \"1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060\""
Apr 17 23:59:11.050898 systemd[1]: Started cri-containerd-1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060.scope - libcontainer container 1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060.
Apr 17 23:59:11.078668 containerd[1484]: time="2026-04-17T23:59:11.078578983Z" level=info msg="StartContainer for \"1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060\" returns successfully"
Apr 17 23:59:11.084353 systemd[1]: cri-containerd-1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060.scope: Deactivated successfully.
Apr 17 23:59:11.106852 containerd[1484]: time="2026-04-17T23:59:11.106779393Z" level=info msg="shim disconnected" id=1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060 namespace=k8s.io
Apr 17 23:59:11.106852 containerd[1484]: time="2026-04-17T23:59:11.106849038Z" level=warning msg="cleaning up after shim disconnected" id=1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060 namespace=k8s.io
Apr 17 23:59:11.107072 containerd[1484]: time="2026-04-17T23:59:11.106885505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:11.961834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e4c358808a6780506b1d2af7b7e672c489a6cbc30f61dc7b3e6b7e0fc854060-rootfs.mount: Deactivated successfully.
Apr 17 23:59:11.990104 kubelet[2518]: E0417 23:59:11.990060 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:11.996407 containerd[1484]: time="2026-04-17T23:59:11.996356107Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:59:12.012013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777057308.mount: Deactivated successfully.
Apr 17 23:59:12.016187 containerd[1484]: time="2026-04-17T23:59:12.016148060Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8\""
Apr 17 23:59:12.016816 containerd[1484]: time="2026-04-17T23:59:12.016774597Z" level=info msg="StartContainer for \"b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8\""
Apr 17 23:59:12.053965 systemd[1]: Started cri-containerd-b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8.scope - libcontainer container b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8.
Apr 17 23:59:12.082046 containerd[1484]: time="2026-04-17T23:59:12.081961639Z" level=info msg="StartContainer for \"b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8\" returns successfully"
Apr 17 23:59:12.083569 systemd[1]: cri-containerd-b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8.scope: Deactivated successfully.
Apr 17 23:59:12.108553 containerd[1484]: time="2026-04-17T23:59:12.108422004Z" level=info msg="shim disconnected" id=b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8 namespace=k8s.io
Apr 17 23:59:12.108553 containerd[1484]: time="2026-04-17T23:59:12.108474367Z" level=warning msg="cleaning up after shim disconnected" id=b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8 namespace=k8s.io
Apr 17 23:59:12.108553 containerd[1484]: time="2026-04-17T23:59:12.108520769Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:12.120652 containerd[1484]: time="2026-04-17T23:59:12.120561478Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:59:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:59:12.961931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b48b82207487e5bc5c565e86e721a5701dbc4ddacc392f6c913690f4df05c7a8-rootfs.mount: Deactivated successfully.
Apr 17 23:59:12.995065 kubelet[2518]: E0417 23:59:12.994982 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:13.002080 containerd[1484]: time="2026-04-17T23:59:13.001961538Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:59:13.018172 containerd[1484]: time="2026-04-17T23:59:13.018091197Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b\""
Apr 17 23:59:13.018921 containerd[1484]: time="2026-04-17T23:59:13.018803676Z" level=info msg="StartContainer for \"333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b\""
Apr 17 23:59:13.042834 systemd[1]: Started cri-containerd-333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b.scope - libcontainer container 333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b.
Apr 17 23:59:13.066925 systemd[1]: cri-containerd-333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b.scope: Deactivated successfully.
Apr 17 23:59:13.071644 containerd[1484]: time="2026-04-17T23:59:13.071394923Z" level=info msg="StartContainer for \"333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b\" returns successfully"
Apr 17 23:59:13.105148 containerd[1484]: time="2026-04-17T23:59:13.105063565Z" level=info msg="shim disconnected" id=333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b namespace=k8s.io
Apr 17 23:59:13.105148 containerd[1484]: time="2026-04-17T23:59:13.105116927Z" level=warning msg="cleaning up after shim disconnected" id=333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b namespace=k8s.io
Apr 17 23:59:13.105148 containerd[1484]: time="2026-04-17T23:59:13.105123703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:59:13.744428 kubelet[2518]: E0417 23:59:13.744366 2518 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:59:13.961886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-333fecda9133bdb59a89a0d7a07d41f1ed575ea7ac942281a734004a0620e91b-rootfs.mount: Deactivated successfully.
Apr 17 23:59:14.000815 kubelet[2518]: E0417 23:59:14.000611 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:14.006791 containerd[1484]: time="2026-04-17T23:59:14.006656307Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:59:14.021097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651549428.mount: Deactivated successfully.
Apr 17 23:59:14.023833 containerd[1484]: time="2026-04-17T23:59:14.023751653Z" level=info msg="CreateContainer within sandbox \"bdab6d7b10f9bfc7baaf2123faeccdbececfe4b2994da73c9f6b9bacd7144a95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e\""
Apr 17 23:59:14.024375 containerd[1484]: time="2026-04-17T23:59:14.024340896Z" level=info msg="StartContainer for \"b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e\""
Apr 17 23:59:14.053902 systemd[1]: Started cri-containerd-b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e.scope - libcontainer container b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e.
Apr 17 23:59:14.084294 containerd[1484]: time="2026-04-17T23:59:14.084182936Z" level=info msg="StartContainer for \"b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e\" returns successfully"
Apr 17 23:59:14.327532 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 17 23:59:14.962574 systemd[1]: run-containerd-runc-k8s.io-b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e-runc.hm6GbB.mount: Deactivated successfully.
Apr 17 23:59:15.011640 kubelet[2518]: E0417 23:59:15.011049 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:15.026368 kubelet[2518]: I0417 23:59:15.026188 2518 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-4tt8v" podStartSLOduration=6.026178319 podStartE2EDuration="6.026178319s" podCreationTimestamp="2026-04-17 23:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:59:15.026173003 +0000 UTC m=+71.398503112" watchObservedRunningTime="2026-04-17 23:59:15.026178319 +0000 UTC m=+71.398508423"
Apr 17 23:59:15.777996 kubelet[2518]: I0417 23:59:15.777448 2518 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:59:15Z","lastTransitionTime":"2026-04-17T23:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 17 23:59:16.013278 kubelet[2518]: E0417 23:59:16.013177 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:17.326032 systemd-networkd[1396]: lxc_health: Link UP
Apr 17 23:59:17.335623 systemd-networkd[1396]: lxc_health: Gained carrier
Apr 17 23:59:17.985947 kubelet[2518]: E0417 23:59:17.985899 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:18.018274 kubelet[2518]: E0417 23:59:18.018018 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:18.385135 systemd[1]: run-containerd-runc-k8s.io-b0282e46800e5ff04e5ec3ed62fbee5e0bc3c478a0b441bbc0e414572eb9818e-runc.94gvcZ.mount: Deactivated successfully.
Apr 17 23:59:18.982617 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Apr 17 23:59:19.021191 kubelet[2518]: E0417 23:59:19.021145 2518 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:59:22.650183 sshd[4315]: pam_unix(sshd:session): session closed for user core
Apr 17 23:59:22.654246 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:54184.service: Deactivated successfully.
Apr 17 23:59:22.656447 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:59:22.657731 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:59:22.658820 systemd-logind[1458]: Removed session 25.