Apr 24 23:39:37.816932 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:39:37.816950 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:39:37.816960 kernel: BIOS-provided physical RAM map: Apr 24 23:39:37.816965 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 24 23:39:37.816970 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 24 23:39:37.816975 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 24 23:39:37.816982 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 24 23:39:37.816987 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 24 23:39:37.816992 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 24 23:39:37.816999 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 24 23:39:37.817004 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 24 23:39:37.817009 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 24 23:39:37.817014 kernel: NX (Execute Disable) protection: active Apr 24 23:39:37.817020 kernel: APIC: Static calls initialized Apr 24 23:39:37.817026 kernel: SMBIOS 2.8 present. 
Apr 24 23:39:37.817033 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 24 23:39:37.817039 kernel: Hypervisor detected: KVM Apr 24 23:39:37.817045 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 24 23:39:37.817050 kernel: kvm-clock: using sched offset of 3155257646 cycles Apr 24 23:39:37.817056 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 24 23:39:37.817062 kernel: tsc: Detected 2793.438 MHz processor Apr 24 23:39:37.817068 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:39:37.817074 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:39:37.817080 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 24 23:39:37.817087 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 24 23:39:37.817092 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:39:37.817098 kernel: Using GB pages for direct mapping Apr 24 23:39:37.817104 kernel: ACPI: Early table checksum verification disabled Apr 24 23:39:37.817110 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 24 23:39:37.817116 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817121 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817127 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817133 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 24 23:39:37.817140 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817145 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817151 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817156 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 24 23:39:37.817162 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 24 23:39:37.817168 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 24 23:39:37.817174 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 24 23:39:37.817183 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 24 23:39:37.817189 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 24 23:39:37.817195 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 24 23:39:37.817201 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 24 23:39:37.817207 kernel: No NUMA configuration found Apr 24 23:39:37.817213 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 24 23:39:37.817219 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 24 23:39:37.817226 kernel: Zone ranges: Apr 24 23:39:37.817231 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:39:37.817236 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 24 23:39:37.817241 kernel: Normal empty Apr 24 23:39:37.817245 kernel: Movable zone start for each node Apr 24 23:39:37.817250 kernel: Early memory node ranges Apr 24 23:39:37.817255 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 24 23:39:37.817260 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 24 23:39:37.817265 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 24 23:39:37.817270 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:39:37.817277 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 24 23:39:37.817282 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 24 23:39:37.817287 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 24 23:39:37.817292 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 24 23:39:37.817297 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:39:37.817302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 24 23:39:37.817307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 24 23:39:37.817312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:39:37.817317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 24 23:39:37.817324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 24 23:39:37.817328 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:39:37.817333 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 24 23:39:37.817338 kernel: TSC deadline timer available Apr 24 23:39:37.817343 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 24 23:39:37.817348 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 24 23:39:37.817353 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 24 23:39:37.817358 kernel: kvm-guest: setup PV sched yield Apr 24 23:39:37.817363 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 24 23:39:37.817370 kernel: Booting paravirtualized kernel on KVM Apr 24 23:39:37.817375 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:39:37.817380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 24 23:39:37.817385 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 24 23:39:37.817391 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 24 23:39:37.817395 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 24 23:39:37.817400 kernel: kvm-guest: PV spinlocks enabled Apr 24 23:39:37.817405 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:39:37.817411 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:39:37.817418 kernel: random: crng init done Apr 24 23:39:37.817423 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:39:37.817428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 24 23:39:37.817433 kernel: Fallback order for Node 0: 0 Apr 24 23:39:37.817438 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 24 23:39:37.817442 kernel: Policy zone: DMA32 Apr 24 23:39:37.817447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:39:37.817453 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved) Apr 24 23:39:37.817459 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 24 23:39:37.817464 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:39:37.817469 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:39:37.817474 kernel: Dynamic Preempt: voluntary Apr 24 23:39:37.817479 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:39:37.817485 kernel: rcu: RCU event tracing is enabled. Apr 24 23:39:37.817490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 24 23:39:37.817495 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:39:37.817500 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:39:37.817505 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:39:37.817511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 24 23:39:37.817517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 24 23:39:37.817521 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 24 23:39:37.817526 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:39:37.817531 kernel: Console: colour VGA+ 80x25 Apr 24 23:39:37.817536 kernel: printk: console [ttyS0] enabled Apr 24 23:39:37.817541 kernel: ACPI: Core revision 20230628 Apr 24 23:39:37.817546 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 24 23:39:37.817551 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:39:37.817558 kernel: x2apic enabled Apr 24 23:39:37.817563 kernel: APIC: Switched APIC routing to: physical x2apic Apr 24 23:39:37.817568 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 24 23:39:37.817573 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 24 23:39:37.817578 kernel: kvm-guest: setup PV IPIs Apr 24 23:39:37.817583 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 24 23:39:37.817588 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 24 23:39:37.817600 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 24 23:39:37.817605 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 24 23:39:37.817611 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 24 23:39:37.817616 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 24 23:39:37.817623 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:39:37.817628 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:39:37.817634 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:39:37.817639 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 24 23:39:37.817645 kernel: RETBleed: Vulnerable Apr 24 23:39:37.817652 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:39:37.817657 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:39:37.817663 kernel: GDS: Unknown: Dependent on hypervisor status Apr 24 23:39:37.817683 kernel: active return thunk: its_return_thunk Apr 24 23:39:37.817752 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:39:37.817758 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:39:37.817763 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:39:37.817769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:39:37.817774 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:39:37.817782 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:39:37.817788 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:39:37.817793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:39:37.817799 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 24 23:39:37.817804 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 24 23:39:37.817809 kernel: x86/fpu: 
xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 24 23:39:37.817815 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 24 23:39:37.817820 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:39:37.817826 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:39:37.817833 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:39:37.817838 kernel: landlock: Up and running. Apr 24 23:39:37.817844 kernel: SELinux: Initializing. Apr 24 23:39:37.817849 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 24 23:39:37.817854 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 24 23:39:37.817860 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 24 23:39:37.817866 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 24 23:39:37.817871 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 24 23:39:37.817878 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 24 23:39:37.817884 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 24 23:39:37.817889 kernel: signal: max sigframe size: 3632 Apr 24 23:39:37.817895 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:39:37.817901 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:39:37.817906 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:39:37.817912 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:39:37.817917 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:39:37.817923 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 24 23:39:37.817928 kernel: smp: Brought up 1 node, 4 CPUs Apr 24 23:39:37.817935 kernel: smpboot: Max logical packages: 1 Apr 24 23:39:37.817941 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 24 23:39:37.817946 kernel: devtmpfs: initialized Apr 24 23:39:37.817952 kernel: x86/mm: Memory block size: 128MB Apr 24 23:39:37.817957 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:39:37.817963 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 24 23:39:37.817968 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:39:37.817974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:39:37.817979 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:39:37.817986 kernel: audit: type=2000 audit(1777073978.061:1): state=initialized audit_enabled=0 res=1 Apr 24 23:39:37.817991 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:39:37.817997 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:39:37.818002 kernel: cpuidle: using governor menu Apr 24 23:39:37.818008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:39:37.818013 kernel: dca service started, version 1.12.1 Apr 24 23:39:37.818019 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 24 23:39:37.818024 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 24 23:39:37.818030 kernel: PCI: Using configuration type 1 for base access Apr 24 23:39:37.818037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:39:37.818043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 24 23:39:37.818048 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 24 23:39:37.818054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 24 23:39:37.818059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 24 23:39:37.818065 kernel: ACPI: Added _OSI(Module Device) Apr 24 23:39:37.818070 kernel: ACPI: Added _OSI(Processor Device) Apr 24 23:39:37.818076 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 24 23:39:37.818083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 24 23:39:37.818088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 24 23:39:37.818094 kernel: ACPI: Interpreter enabled Apr 24 23:39:37.818099 kernel: ACPI: PM: (supports S0 S3 S5) Apr 24 23:39:37.818104 kernel: ACPI: Using IOAPIC for interrupt routing Apr 24 23:39:37.818110 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 24 23:39:37.818115 kernel: PCI: Using E820 reservations for host bridge windows Apr 24 23:39:37.818121 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 24 23:39:37.818126 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 24 23:39:37.818230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 24 23:39:37.818293 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 24 23:39:37.818348 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 24 23:39:37.818355 kernel: PCI host bridge to bus 0000:00 Apr 24 23:39:37.818416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 24 23:39:37.818465 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 24 23:39:37.818514 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 24 23:39:37.818564 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Apr 24 23:39:37.818613 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 24 23:39:37.818661 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 24 23:39:37.818748 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 24 23:39:37.818814 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 24 23:39:37.818876 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 24 23:39:37.818935 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 24 23:39:37.818989 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 24 23:39:37.819043 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 24 23:39:37.819097 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 24 23:39:37.819158 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 24 23:39:37.819214 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 24 23:39:37.819269 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 24 23:39:37.819325 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 24 23:39:37.819384 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 24 23:39:37.819440 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 24 23:39:37.819495 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 24 23:39:37.819549 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 24 23:39:37.819609 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 24 23:39:37.819680 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 24 23:39:37.819759 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 24 23:39:37.819813 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 24 23:39:37.819868 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Apr 24 23:39:37.819926 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 24 23:39:37.819980 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 24 23:39:37.820043 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 24 23:39:37.820097 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 24 23:39:37.820154 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 24 23:39:37.820212 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 24 23:39:37.820266 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 24 23:39:37.820273 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 24 23:39:37.820279 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 24 23:39:37.820284 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 24 23:39:37.820289 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 24 23:39:37.820296 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 24 23:39:37.820302 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 24 23:39:37.820307 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 24 23:39:37.820313 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 24 23:39:37.820318 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 24 23:39:37.820324 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 24 23:39:37.820329 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 24 23:39:37.820335 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 24 23:39:37.820340 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 24 23:39:37.820347 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 24 23:39:37.820352 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 24 23:39:37.820358 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 24 23:39:37.820363 
kernel: iommu: Default domain type: Translated Apr 24 23:39:37.820369 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 24 23:39:37.820374 kernel: PCI: Using ACPI for IRQ routing Apr 24 23:39:37.820380 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 24 23:39:37.820385 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 24 23:39:37.820391 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 24 23:39:37.820445 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 24 23:39:37.820499 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 24 23:39:37.820553 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 24 23:39:37.820560 kernel: vgaarb: loaded Apr 24 23:39:37.820566 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 24 23:39:37.820571 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 24 23:39:37.820577 kernel: clocksource: Switched to clocksource kvm-clock Apr 24 23:39:37.820582 kernel: VFS: Disk quotas dquot_6.6.0 Apr 24 23:39:37.820588 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 24 23:39:37.820595 kernel: pnp: PnP ACPI init Apr 24 23:39:37.820654 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 24 23:39:37.820661 kernel: pnp: PnP ACPI: found 6 devices Apr 24 23:39:37.820684 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 24 23:39:37.820707 kernel: NET: Registered PF_INET protocol family Apr 24 23:39:37.820713 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 24 23:39:37.820718 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 24 23:39:37.820724 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 24 23:39:37.820732 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 24 23:39:37.820737 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 24 23:39:37.820743 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 24 23:39:37.820749 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:39:37.820754 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:39:37.820760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:39:37.820765 kernel: NET: Registered PF_XDP protocol family Apr 24 23:39:37.820819 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 24 23:39:37.820868 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 24 23:39:37.820920 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 24 23:39:37.820969 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 24 23:39:37.821016 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 24 23:39:37.821064 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 24 23:39:37.821071 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:39:37.821077 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 24 23:39:37.821083 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 24 23:39:37.821088 kernel: Initialise system trusted keyrings Apr 24 23:39:37.821096 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 24 23:39:37.821101 kernel: Key type asymmetric registered Apr 24 23:39:37.821106 kernel: Asymmetric key parser 'x509' registered Apr 24 23:39:37.821112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 24 23:39:37.821117 kernel: io scheduler mq-deadline registered Apr 24 23:39:37.821123 kernel: io scheduler kyber registered Apr 24 23:39:37.821128 kernel: io scheduler bfq registered Apr 24 23:39:37.821134 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Apr 24 23:39:37.821140 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 24 23:39:37.821147 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 24 23:39:37.821152 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 24 23:39:37.821158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 24 23:39:37.821163 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 24 23:39:37.821169 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 24 23:39:37.821174 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 24 23:39:37.821180 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 24 23:39:37.821235 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 24 23:39:37.821244 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 24 23:39:37.821295 kernel: rtc_cmos 00:04: registered as rtc0 Apr 24 23:39:37.821346 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T23:39:37 UTC (1777073977) Apr 24 23:39:37.821397 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 24 23:39:37.821404 kernel: intel_pstate: CPU model not supported Apr 24 23:39:37.821410 kernel: NET: Registered PF_INET6 protocol family Apr 24 23:39:37.821415 kernel: Segment Routing with IPv6 Apr 24 23:39:37.821421 kernel: In-situ OAM (IOAM) with IPv6 Apr 24 23:39:37.821426 kernel: NET: Registered PF_PACKET protocol family Apr 24 23:39:37.821433 kernel: Key type dns_resolver registered Apr 24 23:39:37.821438 kernel: IPI shorthand broadcast: enabled Apr 24 23:39:37.821444 kernel: sched_clock: Marking stable (590004950, 161147301)->(826687682, -75535431) Apr 24 23:39:37.821450 kernel: registered taskstats version 1 Apr 24 23:39:37.821455 kernel: Loading compiled-in X.509 certificates Apr 24 23:39:37.821461 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 24 23:39:37.821466 kernel: Key type .fscrypt registered 
Apr 24 23:39:37.821471 kernel: Key type fscrypt-provisioning registered Apr 24 23:39:37.821477 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 23:39:37.821484 kernel: ima: Allocated hash algorithm: sha1 Apr 24 23:39:37.821489 kernel: ima: No architecture policies found Apr 24 23:39:37.821495 kernel: clk: Disabling unused clocks Apr 24 23:39:37.821500 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 24 23:39:37.821506 kernel: Write protecting the kernel read-only data: 36864k Apr 24 23:39:37.821511 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 24 23:39:37.821517 kernel: Run /init as init process Apr 24 23:39:37.821522 kernel: with arguments: Apr 24 23:39:37.821527 kernel: /init Apr 24 23:39:37.821534 kernel: with environment: Apr 24 23:39:37.821540 kernel: HOME=/ Apr 24 23:39:37.821545 kernel: TERM=linux Apr 24 23:39:37.821552 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:39:37.821560 systemd[1]: Detected virtualization kvm. Apr 24 23:39:37.821566 systemd[1]: Detected architecture x86-64. Apr 24 23:39:37.821571 systemd[1]: Running in initrd. Apr 24 23:39:37.821577 systemd[1]: No hostname configured, using default hostname. Apr 24 23:39:37.821584 systemd[1]: Hostname set to . Apr 24 23:39:37.821590 systemd[1]: Initializing machine ID from VM UUID. Apr 24 23:39:37.821595 systemd[1]: Queued start job for default target initrd.target. Apr 24 23:39:37.821601 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:39:37.821607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 24 23:39:37.821613 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 24 23:39:37.821619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:39:37.821625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 23:39:37.821633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 23:39:37.821647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 23:39:37.821654 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 23:39:37.821659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:39:37.821680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:39:37.821702 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:39:37.821709 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:39:37.821715 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:39:37.821721 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:39:37.821727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:39:37.821733 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:39:37.821739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 23:39:37.821745 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:39:37.821753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:39:37.821759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:39:37.821765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 24 23:39:37.821773 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:39:37.821779 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:39:37.821785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:39:37.821791 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:39:37.821797 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:39:37.821802 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:39:37.821810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:39:37.821816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:39:37.821822 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:39:37.821838 systemd-journald[194]: Collecting audit messages is disabled.
Apr 24 23:39:37.821855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:39:37.821861 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:39:37.821871 systemd-journald[194]: Journal started
Apr 24 23:39:37.821886 systemd-journald[194]: Runtime Journal (/run/log/journal/bc6003f48d354132b973512ad9928c6e) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:39:37.823713 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:39:37.824057 systemd-modules-load[195]: Inserted module 'overlay'
Apr 24 23:39:37.901119 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:39:37.901147 kernel: Bridge firewalling registered
Apr 24 23:39:37.846347 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 24 23:39:37.900269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:39:37.901728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:39:37.913832 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:39:37.915874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:39:37.919602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:39:37.920530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:39:37.928728 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:39:37.932295 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:39:37.935577 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:39:37.938410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:39:37.941891 dracut-cmdline[223]: dracut-dracut-053
Apr 24 23:39:37.942565 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:39:37.942057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:39:37.945909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:39:37.954614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:39:37.961841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:39:37.973256 systemd-resolved[238]: Positive Trust Anchors:
Apr 24 23:39:37.973279 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:39:37.973304 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:39:37.975128 systemd-resolved[238]: Defaulting to hostname 'linux'.
Apr 24 23:39:37.975796 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:39:37.977624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:39:38.014745 kernel: SCSI subsystem initialized
Apr 24 23:39:38.022730 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:39:38.031720 kernel: iscsi: registered transport (tcp)
Apr 24 23:39:38.048725 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:39:38.048761 kernel: QLogic iSCSI HBA Driver
Apr 24 23:39:38.078203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:39:38.090816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:39:38.109973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
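The positive trust anchor systemd-resolved logs above is the root zone's DNSSEC DS record: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). A minimal sketch of the field sanity checks that record implies; the helper name is hypothetical, not part of systemd-resolved:

```python
import re

def check_ds_record(key_tag, algorithm, digest_type, digest):
    """Sanity-check a DNSSEC DS record's fields (sketch, not resolved's code)."""
    # Digest type 2 means SHA-256, so the digest must be 64 hex characters.
    if digest_type != 2:
        raise ValueError("only SHA-256 (digest type 2) handled here")
    if not re.fullmatch(r"[0-9a-fA-F]{64}", digest):
        raise ValueError("malformed SHA-256 digest")
    return (key_tag, algorithm)

# The root trust anchor exactly as logged above
assert check_ds_record(
    20326, 8, 2,
    "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
) == (20326, 8)
```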
Apr 24 23:39:38.110028 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:39:38.111291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:39:38.147719 kernel: raid6: avx512x4 gen() 46435 MB/s
Apr 24 23:39:38.164711 kernel: raid6: avx512x2 gen() 45896 MB/s
Apr 24 23:39:38.181735 kernel: raid6: avx512x1 gen() 45541 MB/s
Apr 24 23:39:38.198716 kernel: raid6: avx2x4 gen() 37749 MB/s
Apr 24 23:39:38.215712 kernel: raid6: avx2x2 gen() 37453 MB/s
Apr 24 23:39:38.233088 kernel: raid6: avx2x1 gen() 28565 MB/s
Apr 24 23:39:38.233101 kernel: raid6: using algorithm avx512x4 gen() 46435 MB/s
Apr 24 23:39:38.251085 kernel: raid6: .... xor() 10250 MB/s, rmw enabled
Apr 24 23:39:38.251322 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:39:38.268725 kernel: xor: automatically using best checksumming function avx
Apr 24 23:39:38.413746 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:39:38.422483 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:39:38.435047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:39:38.444220 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 24 23:39:38.446782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:39:38.450978 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:39:38.465818 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Apr 24 23:39:38.488222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:39:38.508807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:39:38.536012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:39:38.544880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
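The raid6 lines above show the kernel benchmarking its gen() implementations at boot and committing to the fastest one. The selection amounts to a max over the measured throughputs; a sketch using the MB/s values copied from the log:

```python
# gen() throughputs reported by the raid6 benchmark above, in MB/s
results = {
    "avx512x4": 46435,
    "avx512x2": 45896,
    "avx512x1": 45541,
    "avx2x4": 37749,
    "avx2x2": 37453,
    "avx2x1": 28565,
}

# Pick the implementation with the highest measured throughput
best = max(results, key=results.get)
assert best == "avx512x4"  # matches "raid6: using algorithm avx512x4 gen() 46435 MB/s"
```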
Apr 24 23:39:38.556453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:39:38.558385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:39:38.562904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:39:38.564446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:39:38.572793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:39:38.578718 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:39:38.579711 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 24 23:39:38.582032 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:39:38.588640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:39:38.590307 kernel: libata version 3.00 loaded.
Apr 24 23:39:38.589291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:39:38.593961 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:39:38.610661 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:39:38.610715 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 24 23:39:38.610829 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:39:38.610838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:39:38.610845 kernel: GPT:9289727 != 19775487
Apr 24 23:39:38.610852 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:39:38.610859 kernel: GPT:9289727 != 19775487
Apr 24 23:39:38.610865 kernel: GPT: Use GNU Parted to correct GPT errors.
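The GPT complaints above are the usual sign of a disk image written to a larger virtual disk: the backup GPT header still sits at the image's old last sector (LBA 9289727) rather than at the actual last sector of vda. A sketch of the arithmetic behind the "9289727 != 19775487" message, using the sector count virtio_blk reports:

```python
def expected_backup_header_lba(total_sectors):
    # GPT keeps its backup header in the very last logical block of the disk,
    # so with N sectors the backup header belongs at LBA N - 1.
    return total_sectors - 1

# virtio_blk reports 19775488 512-byte logical blocks for vda
assert expected_backup_header_lba(19775488) == 19775487

# The image was built for a smaller disk whose last sector was 9289727,
# hence the kernel's "GPT:9289727 != 19775487" warning.
assert 9289727 != expected_backup_header_lba(19775488)
```

Flatcar's disk-uuid.service later in this log rewrites the headers in place, which is why the warnings do not recur after the "Primary Header is updated." messages.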
Apr 24 23:39:38.610872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:39:38.610882 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 23:39:38.610967 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 23:39:38.596929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:39:38.597048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:39:38.621099 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 24 23:39:38.621213 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 23:39:38.621288 kernel: scsi host0: ahci
Apr 24 23:39:38.602324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:39:38.631764 kernel: scsi host1: ahci
Apr 24 23:39:38.631866 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (472)
Apr 24 23:39:38.631875 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 24 23:39:38.616925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:39:38.637080 kernel: scsi host2: ahci
Apr 24 23:39:38.637238 kernel: scsi host3: ahci
Apr 24 23:39:38.637363 kernel: scsi host4: ahci
Apr 24 23:39:38.638739 kernel: scsi host5: ahci
Apr 24 23:39:38.638846 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 24 23:39:38.638855 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 24 23:39:38.638866 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 24 23:39:38.638873 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 24 23:39:38.638880 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 24 23:39:38.638886 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 24 23:39:38.643724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 24 23:39:38.708460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:39:38.714481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 24 23:39:38.717821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:39:38.720549 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 24 23:39:38.721778 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 24 23:39:38.745893 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:39:38.748023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:39:38.752905 disk-uuid[569]: Primary Header is updated.
Apr 24 23:39:38.752905 disk-uuid[569]: Secondary Entries is updated.
Apr 24 23:39:38.752905 disk-uuid[569]: Secondary Header is updated.
Apr 24 23:39:38.756952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:39:38.767272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:39:38.950878 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 23:39:38.951220 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 23:39:38.951229 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 23:39:38.952721 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 23:39:38.953732 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 24 23:39:38.954723 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 23:39:38.955730 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 24 23:39:38.956811 kernel: ata3.00: applying bridge limits
Apr 24 23:39:38.957728 kernel: ata3.00: configured for UDMA/100
Apr 24 23:39:38.959778 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:39:38.999969 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 24 23:39:39.000360 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:39:39.016746 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:39:39.766721 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 24 23:39:39.767172 disk-uuid[570]: The operation has completed successfully.
Apr 24 23:39:39.786022 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:39:39.786110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:39:39.803852 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:39:39.806834 sh[597]: Success
Apr 24 23:39:39.818725 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 24 23:39:39.844474 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:39:39.856997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:39:39.860740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:39:39.869391 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:39:39.869415 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:39:39.869424 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:39:39.870773 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:39:39.872806 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:39:39.876355 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:39:39.877427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:39:39.900833 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:39:39.901915 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:39:39.915624 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:39:39.915657 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:39:39.915667 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:39:39.919743 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:39:39.925362 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:39:39.927776 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:39:39.933116 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:39:39.937842 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
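The verity.usrhash value on the kernel command line is the dm-verity root hash for the /usr device that verity-setup.service just activated; its 64 hex characters line up with the kernel's 'verity: sha256' message above. A sketch of that shape check (the check is ours, not verity-setup's):

```python
import hashlib
import re

# Root hash copied from the kernel command line logged earlier
usrhash = "c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb"

# A SHA-256 root hash is 32 bytes, i.e. 64 lowercase hex characters
assert re.fullmatch(r"[0-9a-f]{64}", usrhash)
assert len(bytes.fromhex(usrhash)) == hashlib.sha256().digest_size
```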
Apr 24 23:39:39.979630 ignition[702]: Ignition 2.19.0
Apr 24 23:39:39.979643 ignition[702]: Stage: fetch-offline
Apr 24 23:39:39.979667 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:39.979674 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:39.979779 ignition[702]: parsed url from cmdline: ""
Apr 24 23:39:39.979781 ignition[702]: no config URL provided
Apr 24 23:39:39.979785 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:39:39.979790 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:39:39.979808 ignition[702]: op(1): [started] loading QEMU firmware config module
Apr 24 23:39:39.979812 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 24 23:39:39.995433 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:39:39.997975 ignition[702]: op(1): [finished] loading QEMU firmware config module
Apr 24 23:39:40.020172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:39:40.036581 systemd-networkd[785]: lo: Link UP
Apr 24 23:39:40.036628 systemd-networkd[785]: lo: Gained carrier
Apr 24 23:39:40.037488 systemd-networkd[785]: Enumeration completed
Apr 24 23:39:40.037580 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:39:40.037977 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:39:40.037980 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:39:40.038776 systemd-networkd[785]: eth0: Link UP
Apr 24 23:39:40.038778 systemd-networkd[785]: eth0: Gained carrier
Apr 24 23:39:40.038784 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:39:40.041453 systemd[1]: Reached target network.target - Network.
Apr 24 23:39:40.059750 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 23:39:40.103143 ignition[702]: parsing config with SHA512: f1af0598e4bce0455fa33cec817ade1b811dbc6b5e3451805feab8130ff4fee73568e25d5fc4bab40ca6c5a768453d293367d756a0e3848dc6defa7cf75eddc4
Apr 24 23:39:40.106091 unknown[702]: fetched base config from "system"
Apr 24 23:39:40.106103 unknown[702]: fetched user config from "qemu"
Apr 24 23:39:40.106384 ignition[702]: fetch-offline: fetch-offline passed
Apr 24 23:39:40.107811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:39:40.106428 ignition[702]: Ignition finished successfully
Apr 24 23:39:40.109949 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 24 23:39:40.118958 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:39:40.130671 ignition[789]: Ignition 2.19.0
Apr 24 23:39:40.130731 ignition[789]: Stage: kargs
Apr 24 23:39:40.132804 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:39:40.130863 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:40.130869 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:40.131463 ignition[789]: kargs: kargs passed
Apr 24 23:39:40.131492 ignition[789]: Ignition finished successfully
Apr 24 23:39:40.139990 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
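The DHCPv4 lease above puts eth0 on a /16 with the gateway inside that prefix. Python's ipaddress module reproduces what the lease implies (addresses copied from the log):

```python
import ipaddress

# Lease as logged: address 10.0.0.62/16, gateway 10.0.0.1
iface = ipaddress.ip_interface("10.0.0.62/16")
gateway = ipaddress.ip_address("10.0.0.1")

# The gateway must be reachable inside the leased prefix
assert gateway in iface.network
assert str(iface.network) == "10.0.0.0/16"
```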
Apr 24 23:39:40.149310 ignition[797]: Ignition 2.19.0
Apr 24 23:39:40.149324 ignition[797]: Stage: disks
Apr 24 23:39:40.149446 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:40.149453 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:40.150089 ignition[797]: disks: disks passed
Apr 24 23:39:40.152315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:39:40.150121 ignition[797]: Ignition finished successfully
Apr 24 23:39:40.154899 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:39:40.155202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:39:40.158226 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:39:40.160626 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:39:40.163140 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:39:40.175835 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:39:40.184749 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:39:40.188051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:39:40.203820 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:39:40.279734 kernel: EXT4-fs (vda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:39:40.279878 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:39:40.282061 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:39:40.292797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:39:40.295484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:39:40.296174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:39:40.296203 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:39:40.296218 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:39:40.303843 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:39:40.305024 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:39:40.312604 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Apr 24 23:39:40.314723 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:39:40.314741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:39:40.316608 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:39:40.319734 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:39:40.320190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:39:40.342910 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:39:40.347174 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:39:40.351386 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:39:40.355065 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:39:40.418445 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:39:40.426886 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:39:40.428290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:39:40.438744 kernel: BTRFS info (device vda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:39:40.450748 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:39:40.461156 ignition[931]: INFO : Ignition 2.19.0
Apr 24 23:39:40.461156 ignition[931]: INFO : Stage: mount
Apr 24 23:39:40.463488 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:40.463488 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:40.463488 ignition[931]: INFO : mount: mount passed
Apr 24 23:39:40.463488 ignition[931]: INFO : Ignition finished successfully
Apr 24 23:39:40.470030 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:39:40.478849 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:39:40.868141 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:39:40.881911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:39:40.890756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Apr 24 23:39:40.890791 kernel: BTRFS info (device vda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:39:40.892770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:39:40.892782 kernel: BTRFS info (device vda6): using free space tree
Apr 24 23:39:40.896728 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 24 23:39:40.897360 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
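Entries in this transcript mix "kernel:", "systemd[1]:" and "ignition[931]:" sources, all sharing a "timestamp source[pid]: message" shape. A minimal parser for that common shape (the regex is a sketch for this log, not journald's format definition):

```python
import re

# timestamp, source name, optional [pid], message
LINE_RE = re.compile(r"^(\w{3} \d+ [\d:.]+) (\S+?)(?:\[(\d+)\])?: (.*)$")

line = "Apr 24 23:39:40.470030 systemd[1]: Finished ignition-mount.service - Ignition (mount)."
ts, source, pid, msg = LINE_RE.match(line).groups()

assert (source, pid) == ("systemd", "1")
assert msg.startswith("Finished ignition-mount.service")

# kernel lines carry no pid; the optional group comes back as None
_, source2, pid2, _ = LINE_RE.match(
    "Apr 24 23:39:40.279734 kernel: EXT4-fs (vda9): mounted filesystem"
).groups()
assert (source2, pid2) == ("kernel", None)
```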
Apr 24 23:39:40.918281 ignition[961]: INFO : Ignition 2.19.0
Apr 24 23:39:40.918281 ignition[961]: INFO : Stage: files
Apr 24 23:39:40.920972 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:40.920972 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:40.920972 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:39:40.920972 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:39:40.920972 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:39:40.929874 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:39:40.929874 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:39:40.929874 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:39:40.929874 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:39:40.929874 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:39:40.927525 unknown[961]: wrote ssh authorized keys file for user: core
Apr 24 23:39:40.960942 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:39:41.007043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:39:41.007043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:39:41.011802 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 24 23:39:41.303357 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:39:41.663077 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 24 23:39:41.834389 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:39:41.834389 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:39:41.838969 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 23:39:42.060655 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:39:42.345971 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 23:39:42.345971 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 24 23:39:42.350394 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:39:42.352942 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:39:42.352942 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 24 23:39:42.352942 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 24 23:39:42.352942 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:39:42.360704 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 23:39:42.360704 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 24 23:39:42.364665 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:39:42.382605 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:39:42.385621 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:39:42.387630 ignition[961]: INFO : files: files passed
Apr 24 23:39:42.387630 ignition[961]: INFO : Ignition finished successfully
Apr 24 23:39:42.391621 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:39:42.402914 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:39:42.405926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:39:42.410473 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:39:42.410554 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
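The files-stage operations above (user "core" with ssh keys, fetched tarballs, the prepare-helm.service unit, presets) are driven by the rendered Ignition config. A hypothetical fragment of the kind of config that produces such a sequence; this is a sketch, not the actual config used on this machine, and the ssh key and unit contents are placeholders:

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```

Each storage file becomes a createFiles op with a GET attempt, and each systemd unit becomes a processing-unit op followed by a preset op, matching the op(3)/op(c)/op(10) sequence in the log.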
Apr 24 23:39:42.414784 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 23:39:42.416506 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:39:42.416506 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:39:42.420379 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:39:42.422933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:39:42.424658 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:39:42.449845 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:39:42.467522 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:39:42.467610 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:39:42.470348 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:39:42.472953 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:39:42.475402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:39:42.478581 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:39:42.490948 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:39:42.494033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:39:42.507161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:39:42.510041 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:39:42.510634 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:39:42.513413 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:39:42.513502 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:39:42.517674 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:39:42.518363 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:39:42.522066 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:39:42.524207 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:39:42.526531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:39:42.529312 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:39:42.532158 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:39:42.534364 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:39:42.537309 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:39:42.539764 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:39:42.542083 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:39:42.542175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:39:42.546017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:39:42.548543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:39:42.549181 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:39:42.549317 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:39:42.553011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:39:42.553093 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:39:42.558091 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:39:42.558193 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:39:42.560788 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:39:42.562998 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:39:42.567757 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:39:42.568500 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:39:42.571775 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:39:42.574090 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:39:42.574157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:39:42.576085 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:39:42.576143 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:39:42.578271 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:39:42.578348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:39:42.580596 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:39:42.580667 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:39:42.603914 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:39:42.604465 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:39:42.604559 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:39:42.607722 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:39:42.610624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:39:42.619030 ignition[1016]: INFO : Ignition 2.19.0
Apr 24 23:39:42.619030 ignition[1016]: INFO : Stage: umount
Apr 24 23:39:42.610774 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:39:42.624210 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:39:42.624210 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 23:39:42.624210 ignition[1016]: INFO : umount: umount passed
Apr 24 23:39:42.624210 ignition[1016]: INFO : Ignition finished successfully
Apr 24 23:39:42.614767 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:39:42.614982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:39:42.622222 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:39:42.622302 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:39:42.625185 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:39:42.625509 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:39:42.625584 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:39:42.628537 systemd[1]: Stopped target network.target - Network.
Apr 24 23:39:42.630819 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:39:42.630866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:39:42.634317 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:39:42.634352 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:39:42.636883 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:39:42.636915 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:39:42.640298 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:39:42.640329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:39:42.643090 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:39:42.645357 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:39:42.658768 systemd-networkd[785]: eth0: DHCPv6 lease lost
Apr 24 23:39:42.659616 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:39:42.659743 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:39:42.660866 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:39:42.660950 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:39:42.664265 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:39:42.664299 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:39:42.684012 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:39:42.687051 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:39:42.687119 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:39:42.692762 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:39:42.694192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:39:42.697455 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:39:42.697518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:39:42.700348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:39:42.700387 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:39:42.703483 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:39:42.706823 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:39:42.706912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:39:42.716616 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:39:42.718035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:39:42.721161 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:39:42.721258 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:39:42.727202 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:39:42.727329 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:39:42.730842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:39:42.730889 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:39:42.733604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:39:42.733634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:39:42.734371 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:39:42.734412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:39:42.738757 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:39:42.738801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:39:42.744455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:39:42.744499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:39:42.761901 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:39:42.762458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:39:42.762498 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:39:42.765447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:39:42.765478 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:39:42.768907 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:39:42.768979 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:39:42.772348 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:39:42.775614 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:39:42.785527 systemd[1]: Switching root.
Apr 24 23:39:42.809316 systemd-journald[194]: Journal stopped
Apr 24 23:39:43.446017 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:39:43.447265 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:39:43.447282 kernel: SELinux: policy capability open_perms=1
Apr 24 23:39:43.447292 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:39:43.447300 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:39:43.447307 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:39:43.447314 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:39:43.447322 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:39:43.447329 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:39:43.447337 kernel: audit: type=1403 audit(1777073982.917:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:39:43.447349 systemd[1]: Successfully loaded SELinux policy in 31.611ms.
Apr 24 23:39:43.447366 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.773ms.
Apr 24 23:39:43.447379 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:39:43.447388 systemd[1]: Detected virtualization kvm.
Apr 24 23:39:43.447396 systemd[1]: Detected architecture x86-64.
Apr 24 23:39:43.447404 systemd[1]: Detected first boot.
Apr 24 23:39:43.447412 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:39:43.447421 zram_generator::config[1059]: No configuration found.
Apr 24 23:39:43.447430 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:39:43.447438 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:39:43.447447 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:39:43.447456 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:39:43.447464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:39:43.447472 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:39:43.447480 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:39:43.447487 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:39:43.447495 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:39:43.447503 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:39:43.447511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:39:43.447521 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:39:43.447529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:39:43.447536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:39:43.447544 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:39:43.447553 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:39:43.447561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:39:43.447569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:39:43.447577 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:39:43.447584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:39:43.447594 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:39:43.447601 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:39:43.447612 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:39:43.447620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:39:43.447627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:39:43.447635 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:39:43.447643 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:39:43.447651 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:39:43.447661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:39:43.447669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:39:43.447677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:39:43.447685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:39:43.447713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:39:43.447721 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:39:43.447728 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:39:43.447736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:39:43.447767 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:39:43.447777 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:39:43.447785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:39:43.447793 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:39:43.447801 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:39:43.447809 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:39:43.447817 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:39:43.447824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:39:43.447832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:39:43.447841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:39:43.447849 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:39:43.447857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:39:43.447865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:39:43.447873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:39:43.447881 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:39:43.447889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:39:43.447896 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:39:43.447905 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:39:43.447913 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:39:43.447921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:39:43.447928 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:39:43.447936 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:39:43.447944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:39:43.447951 kernel: fuse: init (API version 7.39)
Apr 24 23:39:43.447958 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:39:43.447966 kernel: loop: module loaded
Apr 24 23:39:43.447973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:39:43.447982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:39:43.447990 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:39:43.448015 systemd-journald[1136]: Collecting audit messages is disabled.
Apr 24 23:39:43.448035 systemd[1]: Stopped verity-setup.service.
Apr 24 23:39:43.448044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:39:43.448053 systemd-journald[1136]: Journal started
Apr 24 23:39:43.448071 systemd-journald[1136]: Runtime Journal (/run/log/journal/bc6003f48d354132b973512ad9928c6e) is 6.0M, max 48.4M, 42.3M free.
Apr 24 23:39:43.215617 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:39:43.229027 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 23:39:43.229349 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:39:43.451733 kernel: ACPI: bus type drm_connector registered
Apr 24 23:39:43.451777 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:39:43.454208 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:39:43.455619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:39:43.457100 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:39:43.458452 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:39:43.459942 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:39:43.461685 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:39:43.463486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:39:43.465407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:39:43.467432 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:39:43.467605 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:39:43.469507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:39:43.469677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:39:43.471582 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:39:43.471799 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:39:43.473570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:39:43.473929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:39:43.475902 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:39:43.476046 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:39:43.477620 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:39:43.477842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:39:43.479387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:39:43.481049 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:39:43.482927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:39:43.492424 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:39:43.503995 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:39:43.506307 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:39:43.507762 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:39:43.507798 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:39:43.509798 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:39:43.512303 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:39:43.514558 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:39:43.515944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:39:43.517590 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:39:43.519860 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:39:43.521350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:39:43.523263 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:39:43.524871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:39:43.525637 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:39:43.530544 systemd-journald[1136]: Time spent on flushing to /var/log/journal/bc6003f48d354132b973512ad9928c6e is 11.562ms for 951 entries.
Apr 24 23:39:43.530544 systemd-journald[1136]: System Journal (/var/log/journal/bc6003f48d354132b973512ad9928c6e) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:39:43.551107 systemd-journald[1136]: Received client request to flush runtime journal.
Apr 24 23:39:43.551133 kernel: loop0: detected capacity change from 0 to 140768
Apr 24 23:39:43.530875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:39:43.534889 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:39:43.538110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:39:43.540061 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:39:43.543049 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:39:43.548876 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:39:43.551387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:39:43.554884 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:39:43.561270 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:39:43.572927 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:39:43.577925 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:39:43.579862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:39:43.588782 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:39:43.589873 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 24 23:39:43.592739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:39:43.594868 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:39:43.595337 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:39:43.602879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:39:43.615260 kernel: loop1: detected capacity change from 0 to 142488
Apr 24 23:39:43.622043 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 24 23:39:43.622055 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Apr 24 23:39:43.626411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:39:43.664722 kernel: loop2: detected capacity change from 0 to 217752
Apr 24 23:39:43.699724 kernel: loop3: detected capacity change from 0 to 140768
Apr 24 23:39:43.709709 kernel: loop4: detected capacity change from 0 to 142488
Apr 24 23:39:43.718735 kernel: loop5: detected capacity change from 0 to 217752
Apr 24 23:39:43.724286 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 23:39:43.724569 (sd-merge)[1197]: Merged extensions into '/usr'.
Apr 24 23:39:43.728523 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:39:43.728533 systemd[1]: Reloading...
Apr 24 23:39:43.751253 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:39:43.763777 zram_generator::config[1222]: No configuration found.
Apr 24 23:39:43.833280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:39:43.861589 systemd[1]: Reloading finished in 132 ms.
Apr 24 23:39:43.892297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:39:43.894030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:39:43.895779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:39:43.918900 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:39:43.920726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:39:43.923239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:39:43.926158 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:39:43.926177 systemd[1]: Reloading...
Apr 24 23:39:43.934599 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:39:43.934854 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:39:43.935343 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:39:43.935512 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Apr 24 23:39:43.935560 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Apr 24 23:39:43.937836 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:39:43.937851 systemd-tmpfiles[1263]: Skipping /boot
Apr 24 23:39:43.942655 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:39:43.942681 systemd-tmpfiles[1263]: Skipping /boot
Apr 24 23:39:43.944253 systemd-udevd[1264]: Using default interface naming scheme 'v255'.
Apr 24 23:39:43.968731 zram_generator::config[1286]: No configuration found.
Apr 24 23:39:44.010762 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1312)
Apr 24 23:39:44.022813 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 24 23:39:44.034486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:39:44.047776 kernel: ACPI: button: Power Button [PWRF]
Apr 24 23:39:44.050101 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 24 23:39:44.058957 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 24 23:39:44.059134 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 24 23:39:44.059235 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 24 23:39:44.074725 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:39:44.088873 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 23:39:44.089014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 23:39:44.090794 systemd[1]: Reloading finished in 164 ms.
Apr 24 23:39:44.131790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:39:44.133845 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:39:44.176932 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:39:44.178956 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:39:44.190973 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:39:44.202845 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:39:44.205142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:39:44.206933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:39:44.210029 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:39:44.212680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:39:44.214773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:39:44.217356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:39:44.219378 lvm[1364]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:39:44.219732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:39:44.221827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:39:44.222441 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:39:44.225151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:39:44.228841 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:39:44.231827 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:39:44.236614 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 23:39:44.238874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 24 23:39:44.241604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:39:44.243066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:39:44.243552 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:39:44.245479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:39:44.245567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:39:44.247280 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:39:44.247363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:39:44.248992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:39:44.249090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:39:44.251194 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:39:44.251296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:39:44.252876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:39:44.256713 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:39:44.259741 augenrules[1395]: No rules Apr 24 23:39:44.261342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:39:44.263244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:39:44.270912 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:39:44.272558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 24 23:39:44.272744 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:39:44.273528 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:39:44.274314 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:39:44.276997 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:39:44.278774 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:39:44.279223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:39:44.281378 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:39:44.288914 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:39:44.295915 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:39:44.349239 systemd-networkd[1379]: lo: Link UP Apr 24 23:39:44.349246 systemd-networkd[1379]: lo: Gained carrier Apr 24 23:39:44.350095 systemd-networkd[1379]: Enumeration completed Apr 24 23:39:44.350475 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:39:44.350478 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:39:44.351005 systemd-networkd[1379]: eth0: Link UP Apr 24 23:39:44.351008 systemd-networkd[1379]: eth0: Gained carrier Apr 24 23:39:44.351016 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 24 23:39:44.354212 systemd-resolved[1381]: Positive Trust Anchors: Apr 24 23:39:44.354238 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:39:44.354263 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:39:44.357261 systemd-resolved[1381]: Defaulting to hostname 'linux'. Apr 24 23:39:44.365768 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 23:39:44.366198 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Apr 24 23:39:45.631533 systemd-resolved[1381]: Clock change detected. Flushing caches. Apr 24 23:39:45.631564 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 24 23:39:45.631590 systemd-timesyncd[1382]: Initial clock synchronization to Fri 2026-04-24 23:39:45.631497 UTC. Apr 24 23:39:45.633419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:39:45.635105 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 23:39:45.636689 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:39:45.638157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:39:45.639793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:39:45.642326 systemd[1]: Reached target network.target - Network. 
Apr 24 23:39:45.643508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:39:45.644999 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:39:45.646382 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:39:45.647954 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:39:45.649680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:39:45.651247 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:39:45.651278 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:39:45.652421 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:39:45.653787 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:39:45.655165 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:39:45.656724 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:39:45.658417 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:39:45.660764 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:39:45.669198 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:39:45.671630 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:39:45.673501 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:39:45.675020 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:39:45.676326 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:39:45.677572 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 24 23:39:45.677603 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:39:45.678313 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:39:45.680368 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:39:45.683176 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:39:45.685158 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:39:45.686555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:39:45.687599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:39:45.690778 jq[1429]: false Apr 24 23:39:45.692621 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:39:45.694630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:39:45.698001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 24 23:39:45.703497 extend-filesystems[1430]: Found loop3 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found loop4 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found loop5 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found sr0 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda1 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda2 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda3 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found usr Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda4 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda6 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda7 Apr 24 23:39:45.705979 extend-filesystems[1430]: Found vda9 Apr 24 23:39:45.705979 extend-filesystems[1430]: Checking size of /dev/vda9 Apr 24 23:39:45.703772 dbus-daemon[1428]: [system] SELinux support is enabled Apr 24 23:39:45.703597 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:39:45.729594 extend-filesystems[1430]: Resized partition /dev/vda9 Apr 24 23:39:45.704386 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:39:45.734765 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 24 23:39:45.734787 update_engine[1442]: I20260424 23:39:45.719269 1442 main.cc:92] Flatcar Update Engine starting Apr 24 23:39:45.734787 update_engine[1442]: I20260424 23:39:45.720161 1442 update_check_scheduler.cc:74] Next update check in 7m9s Apr 24 23:39:45.735004 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:39:45.704981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:39:45.705626 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 24 23:39:45.738366 jq[1445]: true Apr 24 23:39:45.709222 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:39:45.710390 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:39:45.738668 jq[1457]: true Apr 24 23:39:45.716769 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:39:45.716906 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:39:45.717087 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:39:45.717180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:39:45.719687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:39:45.719921 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:39:45.727861 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:39:45.727895 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:39:45.729796 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:39:45.729809 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:39:45.735226 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:39:45.736315 systemd[1]: Started update-engine.service - Update Engine. 
Apr 24 23:39:45.745949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1306) Apr 24 23:39:45.750633 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:39:45.762573 tar[1449]: linux-amd64/LICENSE Apr 24 23:39:45.762573 tar[1449]: linux-amd64/helm Apr 24 23:39:45.774885 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Apr 24 23:39:45.774914 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 23:39:45.775398 systemd-logind[1438]: New seat seat0. Apr 24 23:39:45.776790 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:39:45.801476 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 24 23:39:45.806263 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:39:45.817644 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 24 23:39:45.817644 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 24 23:39:45.817644 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 24 23:39:45.823445 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Apr 24 23:39:45.818773 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:39:45.818905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:39:45.824997 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:39:45.825888 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:39:45.828120 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 24 23:39:45.884558 containerd[1458]: time="2026-04-24T23:39:45.884417488Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:39:45.901653 containerd[1458]: time="2026-04-24T23:39:45.901622148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.905697 containerd[1458]: time="2026-04-24T23:39:45.905656091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:39:45.905697 containerd[1458]: time="2026-04-24T23:39:45.905687098Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:39:45.905697 containerd[1458]: time="2026-04-24T23:39:45.905699303Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:39:45.905815 containerd[1458]: time="2026-04-24T23:39:45.905799309Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:39:45.905833 containerd[1458]: time="2026-04-24T23:39:45.905818483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.905870 containerd[1458]: time="2026-04-24T23:39:45.905854662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:39:45.905888 containerd[1458]: time="2026-04-24T23:39:45.905870066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906005 containerd[1458]: time="2026-04-24T23:39:45.905976842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906005 containerd[1458]: time="2026-04-24T23:39:45.905997961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906034 containerd[1458]: time="2026-04-24T23:39:45.906007680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906034 containerd[1458]: time="2026-04-24T23:39:45.906014190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906077 containerd[1458]: time="2026-04-24T23:39:45.906062957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906224 containerd[1458]: time="2026-04-24T23:39:45.906197710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906303 containerd[1458]: time="2026-04-24T23:39:45.906287727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:39:45.906319 containerd[1458]: time="2026-04-24T23:39:45.906303630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 24 23:39:45.906367 containerd[1458]: time="2026-04-24T23:39:45.906353535Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:39:45.906401 containerd[1458]: time="2026-04-24T23:39:45.906388296Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:39:45.911135 containerd[1458]: time="2026-04-24T23:39:45.911112604Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:39:45.911201 containerd[1458]: time="2026-04-24T23:39:45.911146139Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:39:45.911201 containerd[1458]: time="2026-04-24T23:39:45.911157176Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:39:45.911201 containerd[1458]: time="2026-04-24T23:39:45.911167211Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:39:45.911201 containerd[1458]: time="2026-04-24T23:39:45.911177209Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:39:45.911306 containerd[1458]: time="2026-04-24T23:39:45.911257630Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:39:45.911445 containerd[1458]: time="2026-04-24T23:39:45.911404390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:39:45.911539 containerd[1458]: time="2026-04-24T23:39:45.911511213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:39:45.911572 containerd[1458]: time="2026-04-24T23:39:45.911545436Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 24 23:39:45.911572 containerd[1458]: time="2026-04-24T23:39:45.911555711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:39:45.911572 containerd[1458]: time="2026-04-24T23:39:45.911564952Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911574602Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911583471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911592195Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911601827Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911610893Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911619587Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911627548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911641378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 24 23:39:45.911650 containerd[1458]: time="2026-04-24T23:39:45.911649768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911663339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911672483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911680381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911692331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911700916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911709587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911717899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911727619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911735276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911743090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911751469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911760769Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911774498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911782649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.911971 containerd[1458]: time="2026-04-24T23:39:45.911790085Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911831719Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911843928Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911851570Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911859485Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911866118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911874261Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911883776Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:39:45.912183 containerd[1458]: time="2026-04-24T23:39:45.911890846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:39:45.912283 containerd[1458]: time="2026-04-24T23:39:45.912066167Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:39:45.912283 containerd[1458]: time="2026-04-24T23:39:45.912112422Z" level=info msg="Connect containerd service" Apr 24 23:39:45.912283 containerd[1458]: time="2026-04-24T23:39:45.912135589Z" level=info msg="using legacy CRI server" Apr 24 23:39:45.912283 containerd[1458]: time="2026-04-24T23:39:45.912141008Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:39:45.912283 containerd[1458]: time="2026-04-24T23:39:45.912199949Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:39:45.912625 containerd[1458]: time="2026-04-24T23:39:45.912603243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 24 23:39:45.912825 containerd[1458]: time="2026-04-24T23:39:45.912731792Z" level=info msg="Start subscribing containerd event" Apr 24 23:39:45.912904 containerd[1458]: time="2026-04-24T23:39:45.912826560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:39:45.912968 containerd[1458]: time="2026-04-24T23:39:45.912951023Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:39:45.912988 containerd[1458]: time="2026-04-24T23:39:45.912876766Z" level=info msg="Start recovering state" Apr 24 23:39:45.913025 containerd[1458]: time="2026-04-24T23:39:45.913012009Z" level=info msg="Start event monitor" Apr 24 23:39:45.913041 containerd[1458]: time="2026-04-24T23:39:45.913026388Z" level=info msg="Start snapshots syncer" Apr 24 23:39:45.913041 containerd[1458]: time="2026-04-24T23:39:45.913033460Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:39:45.913041 containerd[1458]: time="2026-04-24T23:39:45.913038481Z" level=info msg="Start streaming server" Apr 24 23:39:45.913105 containerd[1458]: time="2026-04-24T23:39:45.913092504Z" level=info msg="containerd successfully booted in 0.029350s" Apr 24 23:39:45.913169 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:39:45.956742 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:39:45.973592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:39:45.979687 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:39:45.984629 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:39:45.984764 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:39:46.003696 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:39:46.010720 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 24 23:39:46.013697 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:39:46.015993 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:39:46.017866 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:39:46.177599 tar[1449]: linux-amd64/README.md Apr 24 23:39:46.195544 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:39:47.599875 systemd-networkd[1379]: eth0: Gained IPv6LL Apr 24 23:39:47.602314 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:39:47.604574 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:39:47.614672 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 23:39:47.617071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:39:47.619209 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:39:47.631353 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 23:39:47.631538 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 23:39:47.633267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:39:47.635039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:39:48.218908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:39:48.220767 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:39:48.222291 systemd[1]: Startup finished in 700ms (kernel) + 5.255s (initrd) + 4.069s (userspace) = 10.024s. 
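The startup summary above can be sanity-checked: 700 ms (kernel) + 5.255 s (initrd) + 4.069 s (userspace) does equal the reported 10.024 s total. A small script verifying systemd's arithmetic (the line is copied verbatim from the log):

```python
import re

# Startup summary as printed by systemd in the log above.
line = ("Startup finished in 700ms (kernel) + 5.255s (initrd) "
        "+ 4.069s (userspace) = 10.024s.")

def to_seconds(token: str) -> float:
    # systemd prints sub-second phases as "700ms" and the rest as "5.255s".
    return float(token[:-2]) / 1000.0 if token.endswith("ms") else float(token[:-1])

tokens = re.findall(r"(\d+(?:\.\d+)?m?s)", line)
*phases, total = (to_seconds(t) for t in tokens)
assert abs(sum(phases) - total) < 1e-6  # 0.700 + 5.255 + 4.069 == 10.024
```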
Apr 24 23:39:48.222737 (kubelet)[1539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:39:48.570759 kubelet[1539]: E0424 23:39:48.570546 1539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:39:48.573090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:39:48.573208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:39:51.983835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:39:51.984769 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:42996.service - OpenSSH per-connection server daemon (10.0.0.1:42996). Apr 24 23:39:52.022744 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 42996 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.024074 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.030713 systemd-logind[1438]: New session 1 of user core. Apr 24 23:39:52.031475 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:39:52.047734 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:39:52.056652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:39:52.058559 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:39:52.064574 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:39:52.138552 systemd[1556]: Queued start job for default target default.target. 
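The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected before `kubeadm init` or `kubeadm join` has written that file; the unit simply exits and systemd retries later. If one were written by hand, a minimal KubeletConfiguration consistent with what this log shows later (systemd cgroup driver, static pods under /etc/kubernetes/manifests) could be sketched as:

```yaml
# /var/lib/kubelet/config.yaml — illustrative sketch only; kubeadm
# normally generates this file during init/join, so writing it by hand
# is rarely the right fix.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```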
Apr 24 23:39:52.151385 systemd[1556]: Created slice app.slice - User Application Slice. Apr 24 23:39:52.151424 systemd[1556]: Reached target paths.target - Paths. Apr 24 23:39:52.151435 systemd[1556]: Reached target timers.target - Timers. Apr 24 23:39:52.152634 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:39:52.161588 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:39:52.161658 systemd[1556]: Reached target sockets.target - Sockets. Apr 24 23:39:52.161667 systemd[1556]: Reached target basic.target - Basic System. Apr 24 23:39:52.161691 systemd[1556]: Reached target default.target - Main User Target. Apr 24 23:39:52.161710 systemd[1556]: Startup finished in 92ms. Apr 24 23:39:52.162011 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:39:52.163322 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:39:52.227675 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:43010.service - OpenSSH per-connection server daemon (10.0.0.1:43010). Apr 24 23:39:52.266643 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 43010 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.267749 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.270988 systemd-logind[1438]: New session 2 of user core. Apr 24 23:39:52.280696 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:39:52.332717 sshd[1567]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:52.346265 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:43010.service: Deactivated successfully. Apr 24 23:39:52.347342 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:39:52.348343 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:39:52.349264 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016). 
Apr 24 23:39:52.349835 systemd-logind[1438]: Removed session 2. Apr 24 23:39:52.383750 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.384656 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.387765 systemd-logind[1438]: New session 3 of user core. Apr 24 23:39:52.393594 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:39:52.441307 sshd[1574]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:52.454684 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:43016.service: Deactivated successfully. Apr 24 23:39:52.455956 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:39:52.457132 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:39:52.466710 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). Apr 24 23:39:52.467473 systemd-logind[1438]: Removed session 3. Apr 24 23:39:52.498032 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.498998 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.501972 systemd-logind[1438]: New session 4 of user core. Apr 24 23:39:52.512588 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:39:52.563944 sshd[1581]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:52.576529 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:43022.service: Deactivated successfully. Apr 24 23:39:52.577559 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:39:52.578487 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:39:52.579351 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024). 
Apr 24 23:39:52.579904 systemd-logind[1438]: Removed session 4. Apr 24 23:39:52.614255 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.615650 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.618967 systemd-logind[1438]: New session 5 of user core. Apr 24 23:39:52.633632 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 23:39:52.687787 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:39:52.688055 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:39:52.706742 sudo[1591]: pam_unix(sudo:session): session closed for user root Apr 24 23:39:52.708562 sshd[1588]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:52.721451 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:43024.service: Deactivated successfully. Apr 24 23:39:52.722730 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:39:52.723777 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:39:52.724834 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:43040.service - OpenSSH per-connection server daemon (10.0.0.1:43040). Apr 24 23:39:52.725381 systemd-logind[1438]: Removed session 5. Apr 24 23:39:52.759726 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43040 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.760794 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.763973 systemd-logind[1438]: New session 6 of user core. Apr 24 23:39:52.775584 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 23:39:52.825415 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:39:52.825766 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:39:52.828864 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 24 23:39:52.832963 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:39:52.833233 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:39:52.857767 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:39:52.859041 auditctl[1603]: No rules Apr 24 23:39:52.859330 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:39:52.859548 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:39:52.861552 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:39:52.883362 augenrules[1621]: No rules Apr 24 23:39:52.884416 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:39:52.885380 sudo[1599]: pam_unix(sudo:session): session closed for user root Apr 24 23:39:52.887167 sshd[1596]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:52.892137 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:43040.service: Deactivated successfully. Apr 24 23:39:52.893156 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:39:52.894102 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:39:52.895010 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:43056.service - OpenSSH per-connection server daemon (10.0.0.1:43056). Apr 24 23:39:52.895524 systemd-logind[1438]: Removed session 6. 
Apr 24 23:39:52.930187 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 43056 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:39:52.931337 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.934521 systemd-logind[1438]: New session 7 of user core. Apr 24 23:39:52.941611 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:39:52.991808 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:39:52.992012 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:39:53.221730 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:39:53.221861 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:39:53.446377 dockerd[1650]: time="2026-04-24T23:39:53.446303371Z" level=info msg="Starting up" Apr 24 23:39:53.555515 dockerd[1650]: time="2026-04-24T23:39:53.555379547Z" level=info msg="Loading containers: start." Apr 24 23:39:53.656491 kernel: Initializing XFRM netlink socket Apr 24 23:39:53.733211 systemd-networkd[1379]: docker0: Link UP Apr 24 23:39:53.755839 dockerd[1650]: time="2026-04-24T23:39:53.755791001Z" level=info msg="Loading containers: done." Apr 24 23:39:53.766611 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck591164046-merged.mount: Deactivated successfully. 
Apr 24 23:39:53.768278 dockerd[1650]: time="2026-04-24T23:39:53.768222532Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:39:53.768382 dockerd[1650]: time="2026-04-24T23:39:53.768345514Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:39:53.768523 dockerd[1650]: time="2026-04-24T23:39:53.768506947Z" level=info msg="Daemon has completed initialization" Apr 24 23:39:53.797428 dockerd[1650]: time="2026-04-24T23:39:53.797061675Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:39:53.797714 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:39:54.186775 containerd[1458]: time="2026-04-24T23:39:54.186734793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 24 23:39:54.624610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839928817.mount: Deactivated successfully. 
Apr 24 23:39:55.146898 containerd[1458]: time="2026-04-24T23:39:55.146854279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.147397 containerd[1458]: time="2026-04-24T23:39:55.147367817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 24 23:39:55.148104 containerd[1458]: time="2026-04-24T23:39:55.148051343Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.150183 containerd[1458]: time="2026-04-24T23:39:55.150161236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.151049 containerd[1458]: time="2026-04-24T23:39:55.151015303Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 964.252678ms" Apr 24 23:39:55.151085 containerd[1458]: time="2026-04-24T23:39:55.151058316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 24 23:39:55.151570 containerd[1458]: time="2026-04-24T23:39:55.151537804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 24 23:39:55.985770 containerd[1458]: time="2026-04-24T23:39:55.985704192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.986425 containerd[1458]: time="2026-04-24T23:39:55.986375671Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 24 23:39:55.987184 containerd[1458]: time="2026-04-24T23:39:55.987125648Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.989628 containerd[1458]: time="2026-04-24T23:39:55.989574120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:55.990599 containerd[1458]: time="2026-04-24T23:39:55.990576456Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 839.00888ms" Apr 24 23:39:55.990644 containerd[1458]: time="2026-04-24T23:39:55.990604342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 24 23:39:55.991099 containerd[1458]: time="2026-04-24T23:39:55.991079428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 24 23:39:57.144704 containerd[1458]: time="2026-04-24T23:39:57.144646045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:57.145286 containerd[1458]: time="2026-04-24T23:39:57.145245676Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 24 23:39:57.146323 containerd[1458]: time="2026-04-24T23:39:57.146276830Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:57.148550 containerd[1458]: time="2026-04-24T23:39:57.148514353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:57.149390 containerd[1458]: time="2026-04-24T23:39:57.149368531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.158261634s" Apr 24 23:39:57.149420 containerd[1458]: time="2026-04-24T23:39:57.149395615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 24 23:39:57.149999 containerd[1458]: time="2026-04-24T23:39:57.149867205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 24 23:39:58.280299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601848084.mount: Deactivated successfully. 
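The pull records above report both image size and wall-clock duration, so effective throughput can be derived; for the kube-apiserver pull earlier in the log (27576022 bytes in 964.252678 ms) that works out to roughly 27 MiB/s:

```python
# Figures copied from the kube-apiserver pull record in the log above.
size_bytes = 27576022      # reported image size
duration_s = 0.964252678   # "in 964.252678ms"

rate_mib_s = size_bytes / duration_s / (1024 ** 2)
assert 27 < rate_mib_s < 28  # roughly 27.3 MiB/s
```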
Apr 24 23:39:58.641991 containerd[1458]: time="2026-04-24T23:39:58.641946821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:58.642832 containerd[1458]: time="2026-04-24T23:39:58.642784017Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 24 23:39:58.643244 containerd[1458]: time="2026-04-24T23:39:58.643207543Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:58.644756 containerd[1458]: time="2026-04-24T23:39:58.644722682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:58.645035 containerd[1458]: time="2026-04-24T23:39:58.645000959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.495110835s" Apr 24 23:39:58.645086 containerd[1458]: time="2026-04-24T23:39:58.645050040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 24 23:39:58.645536 containerd[1458]: time="2026-04-24T23:39:58.645509765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 24 23:39:58.800335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:39:58.810663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
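"Scheduled restart job, restart counter is at 1" above means the kubelet unit carries a `Restart=` policy, and the ~10 s gap between the failure at 23:39:48 and this restart at 23:39:58 is consistent with `RestartSec=10`. A drop-in of the kind that produces this behavior might look like the following (the file path and exact values are assumptions, not read from the actual unit):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-restart.conf
[Service]
Restart=on-failure
RestartSec=10
```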
Apr 24 23:39:58.914114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:39:58.917245 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:39:58.974035 kubelet[1881]: E0424 23:39:58.973932 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:39:58.976842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:39:58.976958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:39:59.044952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734646578.mount: Deactivated successfully. Apr 24 23:39:59.872106 containerd[1458]: time="2026-04-24T23:39:59.872049281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:59.872614 containerd[1458]: time="2026-04-24T23:39:59.872560559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 24 23:39:59.873484 containerd[1458]: time="2026-04-24T23:39:59.873416674Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:59.875788 containerd[1458]: time="2026-04-24T23:39:59.875744089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:39:59.876924 containerd[1458]: 
time="2026-04-24T23:39:59.876892532Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.231332176s" Apr 24 23:39:59.876973 containerd[1458]: time="2026-04-24T23:39:59.876922825Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 24 23:39:59.877557 containerd[1458]: time="2026-04-24T23:39:59.877527489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 23:40:00.360590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142409897.mount: Deactivated successfully. Apr 24 23:40:00.364554 containerd[1458]: time="2026-04-24T23:40:00.364502239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:00.364952 containerd[1458]: time="2026-04-24T23:40:00.364902581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 24 23:40:00.367594 containerd[1458]: time="2026-04-24T23:40:00.367553151Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:00.368992 containerd[1458]: time="2026-04-24T23:40:00.368959426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:00.369526 containerd[1458]: time="2026-04-24T23:40:00.369491769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with 
image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 491.939565ms" Apr 24 23:40:00.369556 containerd[1458]: time="2026-04-24T23:40:00.369523073Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 23:40:00.369992 containerd[1458]: time="2026-04-24T23:40:00.369967633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 24 23:40:00.716185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280185581.mount: Deactivated successfully. Apr 24 23:40:01.558096 containerd[1458]: time="2026-04-24T23:40:01.558005560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:01.563794 containerd[1458]: time="2026-04-24T23:40:01.563751199Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 24 23:40:01.565804 containerd[1458]: time="2026-04-24T23:40:01.565714787Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:01.567923 containerd[1458]: time="2026-04-24T23:40:01.567899920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:01.568768 containerd[1458]: time="2026-04-24T23:40:01.568645800Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest 
\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.198655839s" Apr 24 23:40:01.568768 containerd[1458]: time="2026-04-24T23:40:01.568754718Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 24 23:40:02.749243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:40:02.757903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:40:02.777518 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)... Apr 24 23:40:02.777536 systemd[1]: Reloading... Apr 24 23:40:02.842526 zram_generator::config[2085]: No configuration found. Apr 24 23:40:02.917921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:40:02.964040 systemd[1]: Reloading finished in 186 ms. Apr 24 23:40:03.008294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:40:03.010644 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:40:03.010900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:40:03.012447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:40:03.124998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:40:03.129282 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:40:03.178231 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:40:03.278700 kubelet[2132]: I0424 23:40:03.278551 2132 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 24 23:40:03.278700 kubelet[2132]: I0424 23:40:03.278599 2132 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:40:03.278700 kubelet[2132]: I0424 23:40:03.278615 2132 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 24 23:40:03.278700 kubelet[2132]: I0424 23:40:03.278619 2132 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:40:03.279116 kubelet[2132]: I0424 23:40:03.279083 2132 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 24 23:40:03.316634 kubelet[2132]: I0424 23:40:03.315369 2132 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:40:03.319427 kubelet[2132]: E0424 23:40:03.319353 2132 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:40:03.323302 kubelet[2132]: E0424 23:40:03.323270 2132 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:40:03.323387 kubelet[2132]: I0424 23:40:03.323337 2132 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:40:03.328918 kubelet[2132]: I0424 23:40:03.328887 2132 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 24 23:40:03.330025 kubelet[2132]: I0424 23:40:03.329975 2132 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:40:03.330170 kubelet[2132]: I0424 23:40:03.330010 2132 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:40:03.330170 kubelet[2132]: I0424 23:40:03.330159 2132 topology_manager.go:143] "Creating topology manager with none policy"
Apr 24 23:40:03.330170 kubelet[2132]: I0424 23:40:03.330166 2132 container_manager_linux.go:308] "Creating device plugin manager"
Apr 24 23:40:03.330336 kubelet[2132]: I0424 23:40:03.330280 2132 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 24 23:40:03.332715 kubelet[2132]: I0424 23:40:03.332657 2132 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 24 23:40:03.333226 kubelet[2132]: I0424 23:40:03.333145 2132 kubelet.go:482] "Attempting to sync node with API server"
Apr 24 23:40:03.333226 kubelet[2132]: I0424 23:40:03.333213 2132 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:40:03.333387 kubelet[2132]: I0424 23:40:03.333367 2132 kubelet.go:394] "Adding apiserver pod source"
Apr 24 23:40:03.333387 kubelet[2132]: I0424 23:40:03.333390 2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:40:03.336496 kubelet[2132]: I0424 23:40:03.336154 2132 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:40:03.338056 kubelet[2132]: I0424 23:40:03.338028 2132 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:40:03.338102 kubelet[2132]: I0424 23:40:03.338061 2132 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 24 23:40:03.338129 kubelet[2132]: W0424 23:40:03.338106 2132 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:40:03.340683 kubelet[2132]: I0424 23:40:03.340529 2132 server.go:1257] "Started kubelet"
Apr 24 23:40:03.341516 kubelet[2132]: I0424 23:40:03.340961 2132 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:40:03.341516 kubelet[2132]: I0424 23:40:03.341037 2132 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 24 23:40:03.341516 kubelet[2132]: I0424 23:40:03.341353 2132 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:40:03.341516 kubelet[2132]: I0424 23:40:03.341436 2132 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:40:03.342010 kubelet[2132]: I0424 23:40:03.341700 2132 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 24 23:40:03.342828 kubelet[2132]: I0424 23:40:03.342534 2132 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:40:03.345136 kubelet[2132]: I0424 23:40:03.345121 2132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:40:03.347920 kubelet[2132]: E0424 23:40:03.347831 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:03.348011 kubelet[2132]: I0424 23:40:03.348005 2132 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 24 23:40:03.348273 kubelet[2132]: I0424 23:40:03.348263 2132 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 24 23:40:03.348392 kubelet[2132]: I0424 23:40:03.348387 2132 reconciler.go:29] "Reconciler: start to sync state"
Apr 24 23:40:03.349083 kubelet[2132]: E0424 23:40:03.349065 2132 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms"
Apr 24 23:40:03.349357 kubelet[2132]: I0424 23:40:03.349341 2132 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:40:03.349612 kubelet[2132]: E0424 23:40:03.348040 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a96f6404b1116f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 23:40:03.340497263 +0000 UTC m=+0.206762853,LastTimestamp:2026-04-24 23:40:03.340497263 +0000 UTC m=+0.206762853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 24 23:40:03.350623 kubelet[2132]: E0424 23:40:03.350594 2132 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:40:03.351447 kubelet[2132]: I0424 23:40:03.351419 2132 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:40:03.351447 kubelet[2132]: I0424 23:40:03.351443 2132 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:40:03.364598 kubelet[2132]: I0424 23:40:03.364551 2132 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:40:03.365639 kubelet[2132]: I0424 23:40:03.365603 2132 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:40:03.365639 kubelet[2132]: I0424 23:40:03.365632 2132 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 24 23:40:03.365710 kubelet[2132]: I0424 23:40:03.365657 2132 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 24 23:40:03.365710 kubelet[2132]: E0424 23:40:03.365692 2132 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:40:03.369035 kubelet[2132]: I0424 23:40:03.368581 2132 cpu_manager.go:225] "Starting" policy="none"
Apr 24 23:40:03.369035 kubelet[2132]: I0424 23:40:03.368596 2132 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 24 23:40:03.369035 kubelet[2132]: I0424 23:40:03.368609 2132 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 24 23:40:03.371548 kubelet[2132]: I0424 23:40:03.371500 2132 policy_none.go:50] "Start"
Apr 24 23:40:03.371548 kubelet[2132]: I0424 23:40:03.371547 2132 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 24 23:40:03.371548 kubelet[2132]: I0424 23:40:03.371556 2132 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 24 23:40:03.373518 kubelet[2132]: I0424 23:40:03.373498 2132 policy_none.go:44] "Start"
Apr 24 23:40:03.376789 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 23:40:03.393813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 23:40:03.396669 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 23:40:03.406393 kubelet[2132]: E0424 23:40:03.406341 2132 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:40:03.406860 kubelet[2132]: I0424 23:40:03.406559 2132 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 24 23:40:03.406860 kubelet[2132]: I0424 23:40:03.406574 2132 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:40:03.406860 kubelet[2132]: I0424 23:40:03.406786 2132 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 24 23:40:03.407785 kubelet[2132]: E0424 23:40:03.407745 2132 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:40:03.407844 kubelet[2132]: E0424 23:40:03.407825 2132 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 24 23:40:03.477373 systemd[1]: Created slice kubepods-burstable-pod786f7650179152c4729a905674de546d.slice - libcontainer container kubepods-burstable-pod786f7650179152c4729a905674de546d.slice.
Apr 24 23:40:03.525776 kubelet[2132]: E0424 23:40:03.524437 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:03.529185 kubelet[2132]: I0424 23:40:03.528718 2132 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:40:03.539128 kubelet[2132]: E0424 23:40:03.538102 2132 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Apr 24 23:40:03.549487 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice.
Apr 24 23:40:03.551512 kubelet[2132]: I0424 23:40:03.550271 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:03.551512 kubelet[2132]: E0424 23:40:03.550703 2132 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms"
Apr 24 23:40:03.551512 kubelet[2132]: I0424 23:40:03.550714 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:03.551512 kubelet[2132]: I0424 23:40:03.550826 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:03.551512 kubelet[2132]: I0424 23:40:03.551260 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:03.552387 kubelet[2132]: I0424 23:40:03.552297 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:03.552509 kubelet[2132]: I0424 23:40:03.552477 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:03.552566 kubelet[2132]: I0424 23:40:03.552522 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:03.552566 kubelet[2132]: I0424 23:40:03.552559 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:03.552626 kubelet[2132]: I0424 23:40:03.552595 2132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 24 23:40:03.565441 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice.
Apr 24 23:40:03.565653 kubelet[2132]: E0424 23:40:03.565576 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:03.572282 kubelet[2132]: E0424 23:40:03.571925 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:03.742814 kubelet[2132]: I0424 23:40:03.742756 2132 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:40:03.743287 kubelet[2132]: E0424 23:40:03.743249 2132 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Apr 24 23:40:03.835559 kubelet[2132]: E0424 23:40:03.835448 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:03.839857 containerd[1458]: time="2026-04-24T23:40:03.839703003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:786f7650179152c4729a905674de546d,Namespace:kube-system,Attempt:0,}"
Apr 24 23:40:03.872635 kubelet[2132]: E0424 23:40:03.872538 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:03.874073 containerd[1458]: time="2026-04-24T23:40:03.873997883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}"
Apr 24 23:40:03.879537 kubelet[2132]: E0424 23:40:03.879429 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:03.880423 containerd[1458]: time="2026-04-24T23:40:03.880374560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}"
Apr 24 23:40:03.952475 kubelet[2132]: E0424 23:40:03.952276 2132 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms"
Apr 24 23:40:04.146211 kubelet[2132]: I0424 23:40:04.146044 2132 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:40:04.146757 kubelet[2132]: E0424 23:40:04.146705 2132 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost"
Apr 24 23:40:04.263275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604440034.mount: Deactivated successfully.
Apr 24 23:40:04.271437 containerd[1458]: time="2026-04-24T23:40:04.271360204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:40:04.272141 containerd[1458]: time="2026-04-24T23:40:04.272119029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:40:04.272646 containerd[1458]: time="2026-04-24T23:40:04.272621437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:40:04.273538 containerd[1458]: time="2026-04-24T23:40:04.273510867Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:40:04.274196 containerd[1458]: time="2026-04-24T23:40:04.274157920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 24 23:40:04.274997 containerd[1458]: time="2026-04-24T23:40:04.274944177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:40:04.275868 containerd[1458]: time="2026-04-24T23:40:04.275804906Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:40:04.279163 containerd[1458]: time="2026-04-24T23:40:04.279082287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:40:04.279854 containerd[1458]: time="2026-04-24T23:40:04.279813911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 405.707506ms"
Apr 24 23:40:04.280636 containerd[1458]: time="2026-04-24T23:40:04.280597334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 440.743108ms"
Apr 24 23:40:04.282792 containerd[1458]: time="2026-04-24T23:40:04.281629907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.172698ms"
Apr 24 23:40:04.513897 containerd[1458]: time="2026-04-24T23:40:04.513582275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:40:04.513897 containerd[1458]: time="2026-04-24T23:40:04.513634954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:40:04.513897 containerd[1458]: time="2026-04-24T23:40:04.513644122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.515111 containerd[1458]: time="2026-04-24T23:40:04.513905856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.515111 containerd[1458]: time="2026-04-24T23:40:04.513867442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:40:04.515111 containerd[1458]: time="2026-04-24T23:40:04.513912362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:40:04.515111 containerd[1458]: time="2026-04-24T23:40:04.513920313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.515111 containerd[1458]: time="2026-04-24T23:40:04.513985152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.516594 containerd[1458]: time="2026-04-24T23:40:04.515805452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:40:04.516594 containerd[1458]: time="2026-04-24T23:40:04.515848614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:40:04.516594 containerd[1458]: time="2026-04-24T23:40:04.515905924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.516594 containerd[1458]: time="2026-04-24T23:40:04.515996216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:04.589715 systemd[1]: Started cri-containerd-08a2ce631fbebe33db8113cafe9c86ebb3ce0d740e2f73b7c58fdefbb785ff7e.scope - libcontainer container 08a2ce631fbebe33db8113cafe9c86ebb3ce0d740e2f73b7c58fdefbb785ff7e.
Apr 24 23:40:04.602263 systemd[1]: Started cri-containerd-cfc90b21b998a73eb4f83efd0c51c78b330dc10d1af42cb20b415cec350076b5.scope - libcontainer container cfc90b21b998a73eb4f83efd0c51c78b330dc10d1af42cb20b415cec350076b5.
Apr 24 23:40:04.606823 systemd[1]: Started cri-containerd-ce4c61a689fc0c85372dd60de0345c404a1a15cad853e04aacb87d56a4b28f64.scope - libcontainer container ce4c61a689fc0c85372dd60de0345c404a1a15cad853e04aacb87d56a4b28f64.
Apr 24 23:40:04.724666 containerd[1458]: time="2026-04-24T23:40:04.724533436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc90b21b998a73eb4f83efd0c51c78b330dc10d1af42cb20b415cec350076b5\""
Apr 24 23:40:04.725063 containerd[1458]: time="2026-04-24T23:40:04.725044766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce4c61a689fc0c85372dd60de0345c404a1a15cad853e04aacb87d56a4b28f64\""
Apr 24 23:40:04.725763 containerd[1458]: time="2026-04-24T23:40:04.725734279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:786f7650179152c4729a905674de546d,Namespace:kube-system,Attempt:0,} returns sandbox id \"08a2ce631fbebe33db8113cafe9c86ebb3ce0d740e2f73b7c58fdefbb785ff7e\""
Apr 24 23:40:04.726269 kubelet[2132]: E0424 23:40:04.726228 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:04.726521 kubelet[2132]: E0424 23:40:04.726402 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:04.726691 kubelet[2132]: E0424 23:40:04.726676 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:04.731580 containerd[1458]: time="2026-04-24T23:40:04.731523687Z" level=info msg="CreateContainer within sandbox \"08a2ce631fbebe33db8113cafe9c86ebb3ce0d740e2f73b7c58fdefbb785ff7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 24 23:40:04.734142 containerd[1458]: time="2026-04-24T23:40:04.734018223Z" level=info msg="CreateContainer within sandbox \"cfc90b21b998a73eb4f83efd0c51c78b330dc10d1af42cb20b415cec350076b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 24 23:40:04.735719 containerd[1458]: time="2026-04-24T23:40:04.735701589Z" level=info msg="CreateContainer within sandbox \"ce4c61a689fc0c85372dd60de0345c404a1a15cad853e04aacb87d56a4b28f64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 24 23:40:04.742965 containerd[1458]: time="2026-04-24T23:40:04.742888227Z" level=info msg="CreateContainer within sandbox \"08a2ce631fbebe33db8113cafe9c86ebb3ce0d740e2f73b7c58fdefbb785ff7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"541fa37c2e587da2ec6339fd2fb40e9631a4da33e6c0d74d19371adfd699d0a8\""
Apr 24 23:40:04.744161 containerd[1458]: time="2026-04-24T23:40:04.744127347Z" level=info msg="StartContainer for \"541fa37c2e587da2ec6339fd2fb40e9631a4da33e6c0d74d19371adfd699d0a8\""
Apr 24 23:40:04.751480 containerd[1458]: time="2026-04-24T23:40:04.751405890Z" level=info msg="CreateContainer within sandbox \"cfc90b21b998a73eb4f83efd0c51c78b330dc10d1af42cb20b415cec350076b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c017fc01474828f7cc5e6ed61ab0b954bbf9262e522883cfb41eeaddea68f19d\""
Apr 24 23:40:04.754038 containerd[1458]: time="2026-04-24T23:40:04.753065103Z" level=info msg="StartContainer for \"c017fc01474828f7cc5e6ed61ab0b954bbf9262e522883cfb41eeaddea68f19d\""
Apr 24 23:40:04.754126 kubelet[2132]: E0424 23:40:04.753201 2132 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s"
Apr 24 23:40:04.755224 containerd[1458]: time="2026-04-24T23:40:04.755112865Z" level=info msg="CreateContainer within sandbox \"ce4c61a689fc0c85372dd60de0345c404a1a15cad853e04aacb87d56a4b28f64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"94caf97ff0f25f8e70f1fd641386e074b5a251b35933a2a638b9a2030629ddd2\""
Apr 24 23:40:04.756855 containerd[1458]: time="2026-04-24T23:40:04.756835763Z" level=info msg="StartContainer for \"94caf97ff0f25f8e70f1fd641386e074b5a251b35933a2a638b9a2030629ddd2\""
Apr 24 23:40:04.774680 systemd[1]: Started cri-containerd-541fa37c2e587da2ec6339fd2fb40e9631a4da33e6c0d74d19371adfd699d0a8.scope - libcontainer container 541fa37c2e587da2ec6339fd2fb40e9631a4da33e6c0d74d19371adfd699d0a8.
Apr 24 23:40:04.779904 systemd[1]: Started cri-containerd-94caf97ff0f25f8e70f1fd641386e074b5a251b35933a2a638b9a2030629ddd2.scope - libcontainer container 94caf97ff0f25f8e70f1fd641386e074b5a251b35933a2a638b9a2030629ddd2.
Apr 24 23:40:04.798858 systemd[1]: Started cri-containerd-c017fc01474828f7cc5e6ed61ab0b954bbf9262e522883cfb41eeaddea68f19d.scope - libcontainer container c017fc01474828f7cc5e6ed61ab0b954bbf9262e522883cfb41eeaddea68f19d.
Apr 24 23:40:04.884423 containerd[1458]: time="2026-04-24T23:40:04.884357658Z" level=info msg="StartContainer for \"541fa37c2e587da2ec6339fd2fb40e9631a4da33e6c0d74d19371adfd699d0a8\" returns successfully"
Apr 24 23:40:04.889529 containerd[1458]: time="2026-04-24T23:40:04.889492631Z" level=info msg="StartContainer for \"94caf97ff0f25f8e70f1fd641386e074b5a251b35933a2a638b9a2030629ddd2\" returns successfully"
Apr 24 23:40:04.918808 containerd[1458]: time="2026-04-24T23:40:04.918751662Z" level=info msg="StartContainer for \"c017fc01474828f7cc5e6ed61ab0b954bbf9262e522883cfb41eeaddea68f19d\" returns successfully"
Apr 24 23:40:04.963030 kubelet[2132]: I0424 23:40:04.962948 2132 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 24 23:40:05.245192 kernel: hrtimer: interrupt took 6869859 ns
Apr 24 23:40:05.379303 kubelet[2132]: E0424 23:40:05.379253 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:05.380729 kubelet[2132]: E0424 23:40:05.380517 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:05.380729 kubelet[2132]: E0424 23:40:05.379864 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:05.380729 kubelet[2132]: E0424 23:40:05.380665 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:05.381406 kubelet[2132]: E0424 23:40:05.381263 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:05.381406 kubelet[2132]: E0424 23:40:05.381371 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:06.386489 kubelet[2132]: E0424 23:40:06.385521 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:06.386489 kubelet[2132]: E0424 23:40:06.385751 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:06.386489 kubelet[2132]: E0424 23:40:06.386157 2132 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 24 23:40:06.386489 kubelet[2132]: E0424 23:40:06.386281 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:06.587855 kubelet[2132]: E0424 23:40:06.587770 2132 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 24 23:40:06.840186 kubelet[2132]: I0424 23:40:06.840110 2132 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 24 23:40:06.840330 kubelet[2132]: E0424 23:40:06.840236 2132 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 24 23:40:06.876962 kubelet[2132]: E0424 23:40:06.876893 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:06.977652 kubelet[2132]: E0424 23:40:06.977555 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:07.078725 kubelet[2132]: E0424 23:40:07.078635 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:07.179034 kubelet[2132]: E0424 23:40:07.178827 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:07.279106 kubelet[2132]: E0424 23:40:07.279020 2132 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 24 23:40:07.350362 kubelet[2132]: I0424 23:40:07.349908 2132 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:40:07.355339 kubelet[2132]: E0424 23:40:07.355301 2132 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:40:07.355339 kubelet[2132]: I0424 23:40:07.355327 2132 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:07.356870 kubelet[2132]: E0424 23:40:07.356838 2132 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:07.356870 kubelet[2132]: I0424 23:40:07.356858 2132 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:07.358175 kubelet[2132]: E0424 23:40:07.358153 2132 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 24 23:40:07.384511 kubelet[2132]: I0424 23:40:07.384449 2132 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:07.389396 kubelet[2132]: E0424 23:40:07.389320 2132 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 24 23:40:07.389856 kubelet[2132]: E0424 23:40:07.389616 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:08.339577 kubelet[2132]: I0424 23:40:08.339118 2132 apiserver.go:52] "Watching apiserver"
Apr 24 23:40:08.349204 kubelet[2132]: I0424 23:40:08.349079 2132 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 24 23:40:09.078880 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)...
Apr 24 23:40:09.078898 systemd[1]: Reloading...
Apr 24 23:40:09.185508 zram_generator::config[2463]: No configuration found.
Apr 24 23:40:09.221446 kubelet[2132]: I0424 23:40:09.221404 2132 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 23:40:09.231557 kubelet[2132]: E0424 23:40:09.231512 2132 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 23:40:09.251710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:40:09.304508 systemd[1]: Reloading finished in 225 ms.
Apr 24 23:40:09.333142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:40:09.357735 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:40:09.357960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:40:09.358035 systemd[1]: kubelet.service: Consumed 1.059s CPU time, 131.8M memory peak, 0B memory swap peak. Apr 24 23:40:09.367773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:40:09.488829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:40:09.492716 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:40:09.578495 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:40:09.583902 kubelet[2505]: I0424 23:40:09.583755 2505 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 23:40:09.583902 kubelet[2505]: I0424 23:40:09.583788 2505 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:40:09.583902 kubelet[2505]: I0424 23:40:09.583801 2505 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:40:09.583902 kubelet[2505]: I0424 23:40:09.583804 2505 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:40:09.584084 kubelet[2505]: I0424 23:40:09.584061 2505 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 23:40:09.585133 kubelet[2505]: I0424 23:40:09.585107 2505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:40:09.588689 kubelet[2505]: I0424 23:40:09.588633 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:40:09.590857 kubelet[2505]: E0424 23:40:09.590815 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:40:09.590909 kubelet[2505]: I0424 23:40:09.590874 2505 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:40:09.594150 kubelet[2505]: I0424 23:40:09.594103 2505 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:40:09.594424 kubelet[2505]: I0424 23:40:09.594374 2505 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:40:09.594634 kubelet[2505]: I0424 23:40:09.594411 2505 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:40:09.594743 kubelet[2505]: I0424 23:40:09.594633 2505 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 23:40:09.594743 
kubelet[2505]: I0424 23:40:09.594644 2505 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 23:40:09.594743 kubelet[2505]: I0424 23:40:09.594667 2505 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:40:09.594889 kubelet[2505]: I0424 23:40:09.594868 2505 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 23:40:09.595045 kubelet[2505]: I0424 23:40:09.595032 2505 kubelet.go:482] "Attempting to sync node with API server" Apr 24 23:40:09.595062 kubelet[2505]: I0424 23:40:09.595050 2505 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:40:09.595078 kubelet[2505]: I0424 23:40:09.595068 2505 kubelet.go:394] "Adding apiserver pod source" Apr 24 23:40:09.595078 kubelet[2505]: I0424 23:40:09.595076 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:40:09.599645 kubelet[2505]: I0424 23:40:09.599562 2505 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:40:09.602271 kubelet[2505]: I0424 23:40:09.602237 2505 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:40:09.602271 kubelet[2505]: I0424 23:40:09.602272 2505 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:40:09.605991 kubelet[2505]: I0424 23:40:09.604568 2505 server.go:1257] "Started kubelet" Apr 24 23:40:09.605991 kubelet[2505]: I0424 23:40:09.604655 2505 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:40:09.605991 kubelet[2505]: I0424 23:40:09.605521 2505 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:40:09.605991 kubelet[2505]: I0424 23:40:09.605682 2505 fs_resource_analyzer.go:69] "Starting FS 
ResourceAnalyzer" Apr 24 23:40:09.606504 kubelet[2505]: I0424 23:40:09.606424 2505 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:40:09.606760 kubelet[2505]: I0424 23:40:09.606739 2505 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:40:09.606955 kubelet[2505]: I0424 23:40:09.606939 2505 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:40:09.607763 kubelet[2505]: I0424 23:40:09.607733 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:40:09.609747 kubelet[2505]: I0424 23:40:09.609708 2505 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 23:40:09.609962 kubelet[2505]: I0424 23:40:09.609947 2505 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:40:09.610057 kubelet[2505]: I0424 23:40:09.610045 2505 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:40:09.614514 kubelet[2505]: I0424 23:40:09.613650 2505 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:40:09.614514 kubelet[2505]: I0424 23:40:09.613807 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:40:09.615270 kubelet[2505]: I0424 23:40:09.615258 2505 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:40:09.623572 kubelet[2505]: I0424 23:40:09.623450 2505 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:40:09.625018 kubelet[2505]: I0424 23:40:09.624995 2505 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:40:09.625018 kubelet[2505]: I0424 23:40:09.625019 2505 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 23:40:09.625127 kubelet[2505]: I0424 23:40:09.625034 2505 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 23:40:09.625127 kubelet[2505]: E0424 23:40:09.625071 2505 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:40:09.803304 kubelet[2505]: E0424 23:40:09.803042 2505 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 23:40:09.826088 kubelet[2505]: I0424 23:40:09.826068 2505 cpu_manager.go:225] "Starting" policy="none" Apr 24 23:40:09.826497 kubelet[2505]: I0424 23:40:09.826483 2505 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 23:40:09.826569 kubelet[2505]: I0424 23:40:09.826564 2505 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 23:40:09.826711 kubelet[2505]: I0424 23:40:09.826702 2505 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 24 23:40:09.826757 kubelet[2505]: I0424 23:40:09.826744 2505 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 24 23:40:09.826781 kubelet[2505]: I0424 23:40:09.826778 2505 policy_none.go:50] "Start" Apr 24 23:40:09.826822 kubelet[2505]: I0424 23:40:09.826817 2505 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:40:09.826894 kubelet[2505]: I0424 23:40:09.826889 2505 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:40:09.827025 kubelet[2505]: I0424 23:40:09.827018 2505 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 23:40:09.827057 
kubelet[2505]: I0424 23:40:09.827054 2505 policy_none.go:44] "Start" Apr 24 23:40:09.830353 kubelet[2505]: E0424 23:40:09.830323 2505 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:40:09.830818 kubelet[2505]: I0424 23:40:09.830494 2505 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 23:40:09.830818 kubelet[2505]: I0424 23:40:09.830505 2505 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:40:09.830818 kubelet[2505]: I0424 23:40:09.830650 2505 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 23:40:09.831210 kubelet[2505]: E0424 23:40:09.831177 2505 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:40:09.939772 kubelet[2505]: I0424 23:40:09.939516 2505 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 24 23:40:09.950597 kubelet[2505]: I0424 23:40:09.950557 2505 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 24 23:40:09.950821 kubelet[2505]: I0424 23:40:09.950671 2505 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 24 23:40:10.005602 kubelet[2505]: I0424 23:40:10.005262 2505 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.005602 kubelet[2505]: I0424 23:40:10.005352 2505 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 23:40:10.005602 kubelet[2505]: I0424 23:40:10.005583 2505 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.015227 kubelet[2505]: E0424 23:40:10.015173 2505 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Apr 24 23:40:10.081526 sudo[2548]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 24 23:40:10.081748 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 24 23:40:10.105594 kubelet[2505]: I0424 23:40:10.105527 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.105594 kubelet[2505]: I0424 23:40:10.105576 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.105764 kubelet[2505]: I0424 23:40:10.105602 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.105764 kubelet[2505]: I0424 23:40:10.105649 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.105764 kubelet[2505]: I0424 23:40:10.105664 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 24 23:40:10.105764 kubelet[2505]: I0424 23:40:10.105677 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.105764 kubelet[2505]: I0424 23:40:10.105690 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/786f7650179152c4729a905674de546d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"786f7650179152c4729a905674de546d\") " pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.105886 kubelet[2505]: I0424 23:40:10.105702 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.105886 kubelet[2505]: I0424 23:40:10.105714 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 23:40:10.316000 kubelet[2505]: E0424 23:40:10.315930 2505 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.316000 kubelet[2505]: E0424 23:40:10.316020 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.316361 kubelet[2505]: E0424 23:40:10.316133 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.596706 kubelet[2505]: I0424 23:40:10.596506 2505 apiserver.go:52] "Watching apiserver" Apr 24 23:40:10.611179 kubelet[2505]: I0424 23:40:10.611101 2505 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:40:10.705676 sudo[2548]: pam_unix(sudo:session): session closed for user root Apr 24 23:40:10.811489 kubelet[2505]: E0424 23:40:10.810265 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.811489 kubelet[2505]: I0424 23:40:10.810307 2505 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.811489 kubelet[2505]: E0424 23:40:10.810786 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.817415 kubelet[2505]: E0424 23:40:10.817389 2505 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 23:40:10.817585 kubelet[2505]: E0424 23:40:10.817535 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:10.880278 kubelet[2505]: I0424 23:40:10.880009 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.879997832 podStartE2EDuration="1.879997832s" podCreationTimestamp="2026-04-24 23:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:10.871439827 +0000 UTC m=+1.365650610" watchObservedRunningTime="2026-04-24 23:40:10.879997832 +0000 UTC m=+1.374208632" Apr 24 23:40:10.887069 kubelet[2505]: I0424 23:40:10.887021 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.887010829 podStartE2EDuration="887.010829ms" podCreationTimestamp="2026-04-24 23:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:10.88037976 +0000 UTC m=+1.374590556" watchObservedRunningTime="2026-04-24 23:40:10.887010829 +0000 UTC m=+1.381221611" Apr 24 23:40:10.900715 kubelet[2505]: I0424 23:40:10.900580 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.894304132 podStartE2EDuration="894.304132ms" podCreationTimestamp="2026-04-24 23:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:10.887242487 +0000 UTC m=+1.381453286" watchObservedRunningTime="2026-04-24 23:40:10.894304132 +0000 UTC m=+1.388514916" Apr 24 23:40:11.811947 kubelet[2505]: E0424 23:40:11.811890 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:11.812262 
kubelet[2505]: E0424 23:40:11.811964 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:12.269092 sudo[1632]: pam_unix(sudo:session): session closed for user root Apr 24 23:40:12.270346 sshd[1629]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:12.273105 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:43056.service: Deactivated successfully. Apr 24 23:40:12.274517 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:40:12.274849 systemd[1]: session-7.scope: Consumed 3.811s CPU time, 161.0M memory peak, 0B memory swap peak. Apr 24 23:40:12.275676 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:40:12.276553 systemd-logind[1438]: Removed session 7. Apr 24 23:40:12.813660 kubelet[2505]: E0424 23:40:12.813574 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:12.813660 kubelet[2505]: E0424 23:40:12.813590 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:13.177970 kubelet[2505]: E0424 23:40:13.177767 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:14.740793 kubelet[2505]: E0424 23:40:14.740743 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:15.025438 kubelet[2505]: I0424 23:40:15.025264 2505 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:40:15.025706 
containerd[1458]: time="2026-04-24T23:40:15.025647748Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:40:15.025944 kubelet[2505]: I0424 23:40:15.025885 2505 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:40:15.732834 kubelet[2505]: E0424 23:40:15.732732 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.064076 systemd[1]: Created slice kubepods-besteffort-pod66188c7b_75f3_412f_ae4c_fa23b567d920.slice - libcontainer container kubepods-besteffort-pod66188c7b_75f3_412f_ae4c_fa23b567d920.slice. Apr 24 23:40:16.077952 systemd[1]: Created slice kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice - libcontainer container kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice. Apr 24 23:40:16.220800 systemd[1]: Created slice kubepods-besteffort-podbca8add5_f396_40b5_a164_6ab8c8f595a1.slice - libcontainer container kubepods-besteffort-podbca8add5_f396_40b5_a164_6ab8c8f595a1.slice. 
Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221767 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66188c7b-75f3-412f-ae4c-fa23b567d920-kube-proxy\") pod \"kube-proxy-cpjnv\" (UID: \"66188c7b-75f3-412f-ae4c-fa23b567d920\") " pod="kube-system/kube-proxy-cpjnv" Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221795 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-bpf-maps\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221811 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hostproc\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221827 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-lib-modules\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221838 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-run\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222249 kubelet[2505]: I0424 23:40:16.221851 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a15d670-4e6f-4959-8eb9-6c1c68d673df-clustermesh-secrets\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222704 kubelet[2505]: I0424 23:40:16.221868 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-config-path\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222704 kubelet[2505]: I0424 23:40:16.221882 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-kernel\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222704 kubelet[2505]: I0424 23:40:16.221895 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hubble-tls\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222704 kubelet[2505]: I0424 23:40:16.221908 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66188c7b-75f3-412f-ae4c-fa23b567d920-lib-modules\") pod \"kube-proxy-cpjnv\" (UID: \"66188c7b-75f3-412f-ae4c-fa23b567d920\") " pod="kube-system/kube-proxy-cpjnv" Apr 24 23:40:16.222704 kubelet[2505]: I0424 23:40:16.221921 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtsw\" (UniqueName: 
\"kubernetes.io/projected/66188c7b-75f3-412f-ae4c-fa23b567d920-kube-api-access-xbtsw\") pod \"kube-proxy-cpjnv\" (UID: \"66188c7b-75f3-412f-ae4c-fa23b567d920\") " pod="kube-system/kube-proxy-cpjnv" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.221955 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-cgroup\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.221968 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cni-path\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.221981 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-etc-cni-netd\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.222008 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-net\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.222035 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66188c7b-75f3-412f-ae4c-fa23b567d920-xtables-lock\") pod \"kube-proxy-cpjnv\" (UID: 
\"66188c7b-75f3-412f-ae4c-fa23b567d920\") " pod="kube-system/kube-proxy-cpjnv" Apr 24 23:40:16.222790 kubelet[2505]: I0424 23:40:16.222048 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-xtables-lock\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.223577 kubelet[2505]: I0424 23:40:16.222067 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjpv\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-kube-api-access-7sjpv\") pod \"cilium-s7fs6\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") " pod="kube-system/cilium-s7fs6" Apr 24 23:40:16.323099 kubelet[2505]: I0424 23:40:16.322969 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjpq\" (UniqueName: \"kubernetes.io/projected/bca8add5-f396-40b5-a164-6ab8c8f595a1-kube-api-access-dxjpq\") pod \"cilium-operator-78cf5644cb-dg727\" (UID: \"bca8add5-f396-40b5-a164-6ab8c8f595a1\") " pod="kube-system/cilium-operator-78cf5644cb-dg727" Apr 24 23:40:16.326499 kubelet[2505]: I0424 23:40:16.323328 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bca8add5-f396-40b5-a164-6ab8c8f595a1-cilium-config-path\") pod \"cilium-operator-78cf5644cb-dg727\" (UID: \"bca8add5-f396-40b5-a164-6ab8c8f595a1\") " pod="kube-system/cilium-operator-78cf5644cb-dg727" Apr 24 23:40:16.375022 kubelet[2505]: E0424 23:40:16.374970 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.376663 containerd[1458]: 
time="2026-04-24T23:40:16.376589994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpjnv,Uid:66188c7b-75f3-412f-ae4c-fa23b567d920,Namespace:kube-system,Attempt:0,}" Apr 24 23:40:16.383753 kubelet[2505]: E0424 23:40:16.383723 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.384160 containerd[1458]: time="2026-04-24T23:40:16.384060778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7fs6,Uid:6a15d670-4e6f-4959-8eb9-6c1c68d673df,Namespace:kube-system,Attempt:0,}" Apr 24 23:40:16.403480 containerd[1458]: time="2026-04-24T23:40:16.403345235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:40:16.403480 containerd[1458]: time="2026-04-24T23:40:16.403422932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:40:16.403583 containerd[1458]: time="2026-04-24T23:40:16.403442735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.404322 containerd[1458]: time="2026-04-24T23:40:16.404255952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.411355 containerd[1458]: time="2026-04-24T23:40:16.411156941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:40:16.411355 containerd[1458]: time="2026-04-24T23:40:16.411206260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:40:16.411355 containerd[1458]: time="2026-04-24T23:40:16.411218748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.411355 containerd[1458]: time="2026-04-24T23:40:16.411287761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.422664 systemd[1]: Started cri-containerd-c1447bb9a31005d146083534ec20106eb3124c60122c780d13b957f49a80c1f9.scope - libcontainer container c1447bb9a31005d146083534ec20106eb3124c60122c780d13b957f49a80c1f9. Apr 24 23:40:16.428126 systemd[1]: Started cri-containerd-e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a.scope - libcontainer container e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a. Apr 24 23:40:16.446070 containerd[1458]: time="2026-04-24T23:40:16.446032371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpjnv,Uid:66188c7b-75f3-412f-ae4c-fa23b567d920,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1447bb9a31005d146083534ec20106eb3124c60122c780d13b957f49a80c1f9\"" Apr 24 23:40:16.446888 kubelet[2505]: E0424 23:40:16.446643 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.448336 containerd[1458]: time="2026-04-24T23:40:16.448305863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7fs6,Uid:6a15d670-4e6f-4959-8eb9-6c1c68d673df,Namespace:kube-system,Attempt:0,} returns sandbox id \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\"" Apr 24 23:40:16.450574 kubelet[2505]: E0424 23:40:16.450524 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.452169 containerd[1458]: time="2026-04-24T23:40:16.452139355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 24 23:40:16.454337 containerd[1458]: time="2026-04-24T23:40:16.454293724Z" level=info msg="CreateContainer within sandbox \"c1447bb9a31005d146083534ec20106eb3124c60122c780d13b957f49a80c1f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:40:16.471812 containerd[1458]: time="2026-04-24T23:40:16.471743703Z" level=info msg="CreateContainer within sandbox \"c1447bb9a31005d146083534ec20106eb3124c60122c780d13b957f49a80c1f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"186bace64d2bd4b915fcbf83f56c7910a4f40a1865c118e6b90f186ceefc1e9d\"" Apr 24 23:40:16.473335 containerd[1458]: time="2026-04-24T23:40:16.472241468Z" level=info msg="StartContainer for \"186bace64d2bd4b915fcbf83f56c7910a4f40a1865c118e6b90f186ceefc1e9d\"" Apr 24 23:40:16.499625 systemd[1]: Started cri-containerd-186bace64d2bd4b915fcbf83f56c7910a4f40a1865c118e6b90f186ceefc1e9d.scope - libcontainer container 186bace64d2bd4b915fcbf83f56c7910a4f40a1865c118e6b90f186ceefc1e9d. 
Apr 24 23:40:16.525319 containerd[1458]: time="2026-04-24T23:40:16.525277973Z" level=info msg="StartContainer for \"186bace64d2bd4b915fcbf83f56c7910a4f40a1865c118e6b90f186ceefc1e9d\" returns successfully" Apr 24 23:40:16.526793 kubelet[2505]: E0424 23:40:16.526560 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.526983 containerd[1458]: time="2026-04-24T23:40:16.526946110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-dg727,Uid:bca8add5-f396-40b5-a164-6ab8c8f595a1,Namespace:kube-system,Attempt:0,}" Apr 24 23:40:16.571412 containerd[1458]: time="2026-04-24T23:40:16.570739329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:40:16.571412 containerd[1458]: time="2026-04-24T23:40:16.570788494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:40:16.571412 containerd[1458]: time="2026-04-24T23:40:16.570801118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.571412 containerd[1458]: time="2026-04-24T23:40:16.571094736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:16.594028 systemd[1]: Started cri-containerd-707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569.scope - libcontainer container 707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569. 
Apr 24 23:40:16.628805 containerd[1458]: time="2026-04-24T23:40:16.628731875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-dg727,Uid:bca8add5-f396-40b5-a164-6ab8c8f595a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\"" Apr 24 23:40:16.629371 kubelet[2505]: E0424 23:40:16.629349 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.834327 kubelet[2505]: E0424 23:40:16.834296 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:16.845627 kubelet[2505]: I0424 23:40:16.845407 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-cpjnv" podStartSLOduration=0.84539107 podStartE2EDuration="845.39107ms" podCreationTimestamp="2026-04-24 23:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:16.845025874 +0000 UTC m=+7.339236658" watchObservedRunningTime="2026-04-24 23:40:16.84539107 +0000 UTC m=+7.339601867" Apr 24 23:40:19.629125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917571121.mount: Deactivated successfully. 
Apr 24 23:40:22.449300 containerd[1458]: time="2026-04-24T23:40:22.449169647Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:22.449827 containerd[1458]: time="2026-04-24T23:40:22.449633992Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 24 23:40:22.450665 containerd[1458]: time="2026-04-24T23:40:22.450628038Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:22.451892 containerd[1458]: time="2026-04-24T23:40:22.451859196Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.999686323s" Apr 24 23:40:22.451935 containerd[1458]: time="2026-04-24T23:40:22.451892261Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 24 23:40:22.453676 containerd[1458]: time="2026-04-24T23:40:22.453651498Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 24 23:40:22.456941 containerd[1458]: time="2026-04-24T23:40:22.456913302Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 23:40:22.471209 containerd[1458]: time="2026-04-24T23:40:22.471157892Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\"" Apr 24 23:40:22.473833 containerd[1458]: time="2026-04-24T23:40:22.471791249Z" level=info msg="StartContainer for \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\"" Apr 24 23:40:22.503611 systemd[1]: Started cri-containerd-a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919.scope - libcontainer container a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919. Apr 24 23:40:22.525854 containerd[1458]: time="2026-04-24T23:40:22.525823619Z" level=info msg="StartContainer for \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\" returns successfully" Apr 24 23:40:22.539134 systemd[1]: cri-containerd-a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919.scope: Deactivated successfully. 
Apr 24 23:40:22.624430 containerd[1458]: time="2026-04-24T23:40:22.624240302Z" level=info msg="shim disconnected" id=a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919 namespace=k8s.io Apr 24 23:40:22.624430 containerd[1458]: time="2026-04-24T23:40:22.624295590Z" level=warning msg="cleaning up after shim disconnected" id=a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919 namespace=k8s.io Apr 24 23:40:22.624430 containerd[1458]: time="2026-04-24T23:40:22.624303395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:40:22.846551 kubelet[2505]: E0424 23:40:22.846517 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:22.853124 containerd[1458]: time="2026-04-24T23:40:22.853025274Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 23:40:22.866623 containerd[1458]: time="2026-04-24T23:40:22.866498807Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\"" Apr 24 23:40:22.869189 containerd[1458]: time="2026-04-24T23:40:22.869091687Z" level=info msg="StartContainer for \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\"" Apr 24 23:40:22.952843 systemd[1]: Started cri-containerd-619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb.scope - libcontainer container 619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb. 
Apr 24 23:40:22.988695 containerd[1458]: time="2026-04-24T23:40:22.988651225Z" level=info msg="StartContainer for \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\" returns successfully" Apr 24 23:40:22.994794 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:40:22.995053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:40:22.995106 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:40:23.003518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:40:23.003710 systemd[1]: cri-containerd-619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb.scope: Deactivated successfully. Apr 24 23:40:23.025257 containerd[1458]: time="2026-04-24T23:40:23.025188101Z" level=info msg="shim disconnected" id=619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb namespace=k8s.io Apr 24 23:40:23.025257 containerd[1458]: time="2026-04-24T23:40:23.025240619Z" level=warning msg="cleaning up after shim disconnected" id=619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb namespace=k8s.io Apr 24 23:40:23.025257 containerd[1458]: time="2026-04-24T23:40:23.025247709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:40:23.029967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:40:23.185588 kubelet[2505]: E0424 23:40:23.185407 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:23.471102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919-rootfs.mount: Deactivated successfully. Apr 24 23:40:23.740056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942226765.mount: Deactivated successfully. 
Apr 24 23:40:23.849755 kubelet[2505]: E0424 23:40:23.849711 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:23.855001 containerd[1458]: time="2026-04-24T23:40:23.854563077Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 23:40:23.875707 containerd[1458]: time="2026-04-24T23:40:23.875640668Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\"" Apr 24 23:40:23.877356 containerd[1458]: time="2026-04-24T23:40:23.877307791Z" level=info msg="StartContainer for \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\"" Apr 24 23:40:23.918661 systemd[1]: Started cri-containerd-8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4.scope - libcontainer container 8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4. Apr 24 23:40:23.949625 containerd[1458]: time="2026-04-24T23:40:23.949572603Z" level=info msg="StartContainer for \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\" returns successfully" Apr 24 23:40:23.950091 systemd[1]: cri-containerd-8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4.scope: Deactivated successfully. 
Apr 24 23:40:23.985943 containerd[1458]: time="2026-04-24T23:40:23.985861059Z" level=info msg="shim disconnected" id=8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4 namespace=k8s.io Apr 24 23:40:23.985943 containerd[1458]: time="2026-04-24T23:40:23.985929346Z" level=warning msg="cleaning up after shim disconnected" id=8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4 namespace=k8s.io Apr 24 23:40:23.985943 containerd[1458]: time="2026-04-24T23:40:23.985937422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:40:24.083775 containerd[1458]: time="2026-04-24T23:40:24.083724750Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:24.084335 containerd[1458]: time="2026-04-24T23:40:24.084298414Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 24 23:40:24.085174 containerd[1458]: time="2026-04-24T23:40:24.085137762Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:40:24.086177 containerd[1458]: time="2026-04-24T23:40:24.086141862Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.632463861s" Apr 24 23:40:24.086210 containerd[1458]: time="2026-04-24T23:40:24.086178899Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 24 23:40:24.091913 containerd[1458]: time="2026-04-24T23:40:24.091866395Z" level=info msg="CreateContainer within sandbox \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 24 23:40:24.101551 containerd[1458]: time="2026-04-24T23:40:24.101518657Z" level=info msg="CreateContainer within sandbox \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\"" Apr 24 23:40:24.102489 containerd[1458]: time="2026-04-24T23:40:24.101952321Z" level=info msg="StartContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\"" Apr 24 23:40:24.133909 systemd[1]: Started cri-containerd-138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d.scope - libcontainer container 138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d. 
Apr 24 23:40:24.158822 containerd[1458]: time="2026-04-24T23:40:24.158774085Z" level=info msg="StartContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" returns successfully" Apr 24 23:40:24.800038 kubelet[2505]: E0424 23:40:24.799582 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:24.853763 kubelet[2505]: E0424 23:40:24.853630 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:24.858390 kubelet[2505]: E0424 23:40:24.858018 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:24.862729 containerd[1458]: time="2026-04-24T23:40:24.862676081Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 23:40:24.883142 containerd[1458]: time="2026-04-24T23:40:24.883060072Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\"" Apr 24 23:40:24.884842 containerd[1458]: time="2026-04-24T23:40:24.883839949Z" level=info msg="StartContainer for \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\"" Apr 24 23:40:24.890579 kubelet[2505]: I0424 23:40:24.890490 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-dg727" podStartSLOduration=1.433483093 podStartE2EDuration="8.890436171s" podCreationTimestamp="2026-04-24 
23:40:16 +0000 UTC" firstStartedPulling="2026-04-24 23:40:16.630414151 +0000 UTC m=+7.124624933" lastFinishedPulling="2026-04-24 23:40:24.08736723 +0000 UTC m=+14.581578011" observedRunningTime="2026-04-24 23:40:24.888813994 +0000 UTC m=+15.383024779" watchObservedRunningTime="2026-04-24 23:40:24.890436171 +0000 UTC m=+15.384646962" Apr 24 23:40:24.942663 systemd[1]: Started cri-containerd-3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b.scope - libcontainer container 3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b. Apr 24 23:40:24.966120 systemd[1]: cri-containerd-3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b.scope: Deactivated successfully. Apr 24 23:40:24.976379 containerd[1458]: time="2026-04-24T23:40:24.968748197Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice/cri-containerd-3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b.scope/memory.events\": no such file or directory" Apr 24 23:40:24.992099 containerd[1458]: time="2026-04-24T23:40:24.991582686Z" level=info msg="StartContainer for \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\" returns successfully" Apr 24 23:40:25.063296 containerd[1458]: time="2026-04-24T23:40:25.062096998Z" level=info msg="shim disconnected" id=3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b namespace=k8s.io Apr 24 23:40:25.063296 containerd[1458]: time="2026-04-24T23:40:25.062196741Z" level=warning msg="cleaning up after shim disconnected" id=3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b namespace=k8s.io Apr 24 23:40:25.063296 containerd[1458]: time="2026-04-24T23:40:25.062207360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:40:25.102656 containerd[1458]: time="2026-04-24T23:40:25.102563195Z" level=warning msg="cleanup 
warnings time=\"2026-04-24T23:40:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:40:25.471115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b-rootfs.mount: Deactivated successfully. Apr 24 23:40:25.740147 kubelet[2505]: E0424 23:40:25.739995 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:25.862963 kubelet[2505]: E0424 23:40:25.862874 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:25.863611 kubelet[2505]: E0424 23:40:25.862990 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:25.870267 containerd[1458]: time="2026-04-24T23:40:25.870075097Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 24 23:40:25.887139 containerd[1458]: time="2026-04-24T23:40:25.887098203Z" level=info msg="CreateContainer within sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\"" Apr 24 23:40:25.887762 containerd[1458]: time="2026-04-24T23:40:25.887708353Z" level=info msg="StartContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\"" Apr 24 23:40:25.914682 systemd[1]: Started 
cri-containerd-2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff.scope - libcontainer container 2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff. Apr 24 23:40:25.936830 containerd[1458]: time="2026-04-24T23:40:25.936793335Z" level=info msg="StartContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" returns successfully" Apr 24 23:40:26.071155 kubelet[2505]: I0424 23:40:26.071079 2505 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 24 23:40:26.112589 systemd[1]: Created slice kubepods-burstable-pod4f10a978_7ea9_4eec_8ca9_22f8c479a14f.slice - libcontainer container kubepods-burstable-pod4f10a978_7ea9_4eec_8ca9_22f8c479a14f.slice. Apr 24 23:40:26.118522 systemd[1]: Created slice kubepods-burstable-pod75f91f05_b70d_4090_84ef_79b26f72d5b6.slice - libcontainer container kubepods-burstable-pod75f91f05_b70d_4090_84ef_79b26f72d5b6.slice. Apr 24 23:40:26.241485 kubelet[2505]: I0424 23:40:26.241361 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f10a978-7ea9-4eec-8ca9-22f8c479a14f-config-volume\") pod \"coredns-7d764666f9-xgblw\" (UID: \"4f10a978-7ea9-4eec-8ca9-22f8c479a14f\") " pod="kube-system/coredns-7d764666f9-xgblw" Apr 24 23:40:26.241832 kubelet[2505]: I0424 23:40:26.241449 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzb8\" (UniqueName: \"kubernetes.io/projected/75f91f05-b70d-4090-84ef-79b26f72d5b6-kube-api-access-klzb8\") pod \"coredns-7d764666f9-jnm2h\" (UID: \"75f91f05-b70d-4090-84ef-79b26f72d5b6\") " pod="kube-system/coredns-7d764666f9-jnm2h" Apr 24 23:40:26.241832 kubelet[2505]: I0424 23:40:26.241653 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqfs\" (UniqueName: 
\"kubernetes.io/projected/4f10a978-7ea9-4eec-8ca9-22f8c479a14f-kube-api-access-xhqfs\") pod \"coredns-7d764666f9-xgblw\" (UID: \"4f10a978-7ea9-4eec-8ca9-22f8c479a14f\") " pod="kube-system/coredns-7d764666f9-xgblw" Apr 24 23:40:26.241832 kubelet[2505]: I0424 23:40:26.241717 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f91f05-b70d-4090-84ef-79b26f72d5b6-config-volume\") pod \"coredns-7d764666f9-jnm2h\" (UID: \"75f91f05-b70d-4090-84ef-79b26f72d5b6\") " pod="kube-system/coredns-7d764666f9-jnm2h" Apr 24 23:40:26.421005 kubelet[2505]: E0424 23:40:26.420859 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:26.422797 kubelet[2505]: E0424 23:40:26.422773 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:26.433420 containerd[1458]: time="2026-04-24T23:40:26.433356532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jnm2h,Uid:75f91f05-b70d-4090-84ef-79b26f72d5b6,Namespace:kube-system,Attempt:0,}" Apr 24 23:40:26.433984 containerd[1458]: time="2026-04-24T23:40:26.433934777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xgblw,Uid:4f10a978-7ea9-4eec-8ca9-22f8c479a14f,Namespace:kube-system,Attempt:0,}" Apr 24 23:40:26.869142 kubelet[2505]: E0424 23:40:26.869084 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:26.886030 kubelet[2505]: I0424 23:40:26.885973 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-s7fs6" podStartSLOduration=1.474114121 
podStartE2EDuration="10.885962496s" podCreationTimestamp="2026-04-24 23:40:16 +0000 UTC" firstStartedPulling="2026-04-24 23:40:16.451768482 +0000 UTC m=+6.945979265" lastFinishedPulling="2026-04-24 23:40:25.863616857 +0000 UTC m=+16.357827640" observedRunningTime="2026-04-24 23:40:26.885672389 +0000 UTC m=+17.379883189" watchObservedRunningTime="2026-04-24 23:40:26.885962496 +0000 UTC m=+17.380173295" Apr 24 23:40:27.811667 systemd-networkd[1379]: cilium_host: Link UP Apr 24 23:40:27.811769 systemd-networkd[1379]: cilium_net: Link UP Apr 24 23:40:27.811772 systemd-networkd[1379]: cilium_net: Gained carrier Apr 24 23:40:27.812395 systemd-networkd[1379]: cilium_host: Gained carrier Apr 24 23:40:27.870165 kubelet[2505]: E0424 23:40:27.870135 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:27.886697 systemd-networkd[1379]: cilium_vxlan: Link UP Apr 24 23:40:27.886704 systemd-networkd[1379]: cilium_vxlan: Gained carrier Apr 24 23:40:28.072494 kernel: NET: Registered PF_ALG protocol family Apr 24 23:40:28.271643 systemd-networkd[1379]: cilium_host: Gained IPv6LL Apr 24 23:40:28.609128 systemd-networkd[1379]: lxc_health: Link UP Apr 24 23:40:28.614204 systemd-networkd[1379]: lxc_health: Gained carrier Apr 24 23:40:28.815999 systemd-networkd[1379]: cilium_net: Gained IPv6LL Apr 24 23:40:28.872246 kubelet[2505]: E0424 23:40:28.872094 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:29.052240 systemd-networkd[1379]: lxc5ca2519a252e: Link UP Apr 24 23:40:29.060494 kernel: eth0: renamed from tmp09bc0 Apr 24 23:40:29.063024 systemd-networkd[1379]: lxc5ca2519a252e: Gained carrier Apr 24 23:40:29.067963 systemd-networkd[1379]: lxc73f843505b2a: Link UP Apr 24 23:40:29.075506 kernel: eth0: renamed from 
tmp88ae1 Apr 24 23:40:29.083304 systemd-networkd[1379]: lxc73f843505b2a: Gained carrier Apr 24 23:40:29.330767 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL Apr 24 23:40:29.968661 systemd-networkd[1379]: lxc_health: Gained IPv6LL Apr 24 23:40:30.287726 systemd-networkd[1379]: lxc73f843505b2a: Gained IPv6LL Apr 24 23:40:30.383750 kubelet[2505]: E0424 23:40:30.383666 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:30.799805 systemd-networkd[1379]: lxc5ca2519a252e: Gained IPv6LL Apr 24 23:40:31.198609 update_engine[1442]: I20260424 23:40:31.198061 1442 update_attempter.cc:509] Updating boot flags... Apr 24 23:40:31.217698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362) Apr 24 23:40:31.247609 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362) Apr 24 23:40:31.272915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3362) Apr 24 23:40:32.078794 containerd[1458]: time="2026-04-24T23:40:32.078635890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:40:32.078794 containerd[1458]: time="2026-04-24T23:40:32.078675607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:40:32.078794 containerd[1458]: time="2026-04-24T23:40:32.078687534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:32.078794 containerd[1458]: time="2026-04-24T23:40:32.078736961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:32.088294 containerd[1458]: time="2026-04-24T23:40:32.087398275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:40:32.088294 containerd[1458]: time="2026-04-24T23:40:32.088034994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:40:32.088294 containerd[1458]: time="2026-04-24T23:40:32.088049436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:32.088294 containerd[1458]: time="2026-04-24T23:40:32.088124314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:40:32.097166 systemd[1]: run-containerd-runc-k8s.io-88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567-runc.YcMPkF.mount: Deactivated successfully. Apr 24 23:40:32.101645 systemd[1]: Started cri-containerd-88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567.scope - libcontainer container 88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567. Apr 24 23:40:32.104800 systemd[1]: Started cri-containerd-09bc020093b1fc3ce82d7f0fd0c305e5b61bcc17f0519f5e0457ccb42c84da9c.scope - libcontainer container 09bc020093b1fc3ce82d7f0fd0c305e5b61bcc17f0519f5e0457ccb42c84da9c. 
Apr 24 23:40:32.111988 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:40:32.113701 systemd-resolved[1381]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 23:40:32.142549 containerd[1458]: time="2026-04-24T23:40:32.142269278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xgblw,Uid:4f10a978-7ea9-4eec-8ca9-22f8c479a14f,Namespace:kube-system,Attempt:0,} returns sandbox id \"09bc020093b1fc3ce82d7f0fd0c305e5b61bcc17f0519f5e0457ccb42c84da9c\"" Apr 24 23:40:32.142549 containerd[1458]: time="2026-04-24T23:40:32.142280451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-jnm2h,Uid:75f91f05-b70d-4090-84ef-79b26f72d5b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567\"" Apr 24 23:40:32.143266 kubelet[2505]: E0424 23:40:32.143216 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.144527 kubelet[2505]: E0424 23:40:32.144308 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.157487 containerd[1458]: time="2026-04-24T23:40:32.157383051Z" level=info msg="CreateContainer within sandbox \"88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:40:32.159427 containerd[1458]: time="2026-04-24T23:40:32.159388558Z" level=info msg="CreateContainer within sandbox \"09bc020093b1fc3ce82d7f0fd0c305e5b61bcc17f0519f5e0457ccb42c84da9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:40:32.177539 containerd[1458]: 
time="2026-04-24T23:40:32.176696139Z" level=info msg="CreateContainer within sandbox \"09bc020093b1fc3ce82d7f0fd0c305e5b61bcc17f0519f5e0457ccb42c84da9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"defa12ba2242e058b212ab7fc857482ba3d8d713b902076f792117dc14c4556b\"" Apr 24 23:40:32.178913 containerd[1458]: time="2026-04-24T23:40:32.177823064Z" level=info msg="StartContainer for \"defa12ba2242e058b212ab7fc857482ba3d8d713b902076f792117dc14c4556b\"" Apr 24 23:40:32.189193 containerd[1458]: time="2026-04-24T23:40:32.189155255Z" level=info msg="CreateContainer within sandbox \"88ae1f8ef1d2405c7805048aee3b1555a12c7fa049ca128f9f6bdbfaa44bf567\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b475ee79bc53b6a00251a0234a91eaf7708172f4e382c008c3b43712eb67e790\"" Apr 24 23:40:32.190536 containerd[1458]: time="2026-04-24T23:40:32.189834415Z" level=info msg="StartContainer for \"b475ee79bc53b6a00251a0234a91eaf7708172f4e382c008c3b43712eb67e790\"" Apr 24 23:40:32.208804 systemd[1]: Started cri-containerd-defa12ba2242e058b212ab7fc857482ba3d8d713b902076f792117dc14c4556b.scope - libcontainer container defa12ba2242e058b212ab7fc857482ba3d8d713b902076f792117dc14c4556b. Apr 24 23:40:32.211783 systemd[1]: Started cri-containerd-b475ee79bc53b6a00251a0234a91eaf7708172f4e382c008c3b43712eb67e790.scope - libcontainer container b475ee79bc53b6a00251a0234a91eaf7708172f4e382c008c3b43712eb67e790. 
Apr 24 23:40:32.234371 containerd[1458]: time="2026-04-24T23:40:32.234149684Z" level=info msg="StartContainer for \"defa12ba2242e058b212ab7fc857482ba3d8d713b902076f792117dc14c4556b\" returns successfully" Apr 24 23:40:32.238034 containerd[1458]: time="2026-04-24T23:40:32.237987136Z" level=info msg="StartContainer for \"b475ee79bc53b6a00251a0234a91eaf7708172f4e382c008c3b43712eb67e790\" returns successfully" Apr 24 23:40:32.586214 kubelet[2505]: I0424 23:40:32.586124 2505 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:40:32.586777 kubelet[2505]: E0424 23:40:32.586587 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.884737 kubelet[2505]: E0424 23:40:32.884332 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.886489 kubelet[2505]: E0424 23:40:32.886402 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.886489 kubelet[2505]: E0424 23:40:32.886438 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:32.904115 kubelet[2505]: I0424 23:40:32.903959 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jnm2h" podStartSLOduration=16.903944928 podStartE2EDuration="16.903944928s" podCreationTimestamp="2026-04-24 23:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:32.903545247 +0000 UTC m=+23.397756033" 
watchObservedRunningTime="2026-04-24 23:40:32.903944928 +0000 UTC m=+23.398155735" Apr 24 23:40:32.940378 kubelet[2505]: I0424 23:40:32.940274 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xgblw" podStartSLOduration=16.940259245 podStartE2EDuration="16.940259245s" podCreationTimestamp="2026-04-24 23:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:32.940220013 +0000 UTC m=+23.434430802" watchObservedRunningTime="2026-04-24 23:40:32.940259245 +0000 UTC m=+23.434470045" Apr 24 23:40:33.369503 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:55450.service - OpenSSH per-connection server daemon (10.0.0.1:55450). Apr 24 23:40:33.410168 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 55450 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:33.411870 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:33.416381 systemd-logind[1438]: New session 8 of user core. Apr 24 23:40:33.424604 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:40:33.561986 sshd[3911]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:33.564699 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:55450.service: Deactivated successfully. Apr 24 23:40:33.566117 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:40:33.566620 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:40:33.567423 systemd-logind[1438]: Removed session 8. 
Apr 24 23:40:33.904957 kubelet[2505]: E0424 23:40:33.904912 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:33.905621 kubelet[2505]: E0424 23:40:33.905072 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:34.909902 kubelet[2505]: E0424 23:40:34.909763 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:34.910930 kubelet[2505]: E0424 23:40:34.909937 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:40:38.575297 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Apr 24 23:40:38.613633 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:38.615364 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:38.625965 systemd-logind[1438]: New session 9 of user core. Apr 24 23:40:38.640025 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:40:38.756834 sshd[3930]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:38.759815 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:33768.service: Deactivated successfully. Apr 24 23:40:38.761611 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:40:38.762134 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:40:38.763020 systemd-logind[1438]: Removed session 9. 
Apr 24 23:40:43.774147 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:33782.service - OpenSSH per-connection server daemon (10.0.0.1:33782). Apr 24 23:40:43.810123 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 33782 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:43.812332 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:43.819071 systemd-logind[1438]: New session 10 of user core. Apr 24 23:40:43.840647 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:40:43.941007 sshd[3945]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:43.943795 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:33782.service: Deactivated successfully. Apr 24 23:40:43.945236 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:40:43.945894 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:40:43.946645 systemd-logind[1438]: Removed session 10. Apr 24 23:40:48.955902 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:52550.service - OpenSSH per-connection server daemon (10.0.0.1:52550). Apr 24 23:40:48.992433 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 52550 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:48.993535 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:48.997098 systemd-logind[1438]: New session 11 of user core. Apr 24 23:40:49.006594 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:40:49.114197 sshd[3964]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:49.123630 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:52550.service: Deactivated successfully. Apr 24 23:40:49.124858 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:40:49.125973 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. 
Apr 24 23:40:49.136735 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Apr 24 23:40:49.137434 systemd-logind[1438]: Removed session 11. Apr 24 23:40:49.172133 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:49.173245 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:49.177242 systemd-logind[1438]: New session 12 of user core. Apr 24 23:40:49.183627 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:40:49.318341 sshd[3979]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:49.326482 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:52556.service: Deactivated successfully. Apr 24 23:40:49.328044 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:40:49.330098 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:40:49.340021 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:52560.service - OpenSSH per-connection server daemon (10.0.0.1:52560). Apr 24 23:40:49.342619 systemd-logind[1438]: Removed session 12. Apr 24 23:40:49.377527 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 52560 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:49.378805 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:49.382512 systemd-logind[1438]: New session 13 of user core. Apr 24 23:40:49.389619 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:40:49.493781 sshd[3992]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:49.496307 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:52560.service: Deactivated successfully. Apr 24 23:40:49.497731 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:40:49.498335 systemd-logind[1438]: Session 13 logged out. 
Waiting for processes to exit. Apr 24 23:40:49.499136 systemd-logind[1438]: Removed session 13. Apr 24 23:40:54.504634 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:52570.service - OpenSSH per-connection server daemon (10.0.0.1:52570). Apr 24 23:40:54.539804 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 52570 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:54.540804 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:54.543867 systemd-logind[1438]: New session 14 of user core. Apr 24 23:40:54.552605 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:40:54.655010 sshd[4008]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:54.657866 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:52570.service: Deactivated successfully. Apr 24 23:40:54.659412 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:40:54.659950 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:40:54.660668 systemd-logind[1438]: Removed session 14. Apr 24 23:40:59.671481 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Apr 24 23:40:59.709828 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:59.710946 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:59.714408 systemd-logind[1438]: New session 15 of user core. Apr 24 23:40:59.718587 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:40:59.816282 sshd[4022]: pam_unix(sshd:session): session closed for user core Apr 24 23:40:59.831602 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:54614.service: Deactivated successfully. Apr 24 23:40:59.832860 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 24 23:40:59.833959 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:40:59.835156 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:54624.service - OpenSSH per-connection server daemon (10.0.0.1:54624). Apr 24 23:40:59.836022 systemd-logind[1438]: Removed session 15. Apr 24 23:40:59.869556 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 54624 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:40:59.870531 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:40:59.873641 systemd-logind[1438]: New session 16 of user core. Apr 24 23:40:59.886586 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:41:00.054171 sshd[4037]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:00.068603 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:54624.service: Deactivated successfully. Apr 24 23:41:00.069845 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:41:00.070946 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:41:00.071974 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:54630.service - OpenSSH per-connection server daemon (10.0.0.1:54630). Apr 24 23:41:00.072552 systemd-logind[1438]: Removed session 16. Apr 24 23:41:00.111175 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 54630 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:00.112273 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:00.115561 systemd-logind[1438]: New session 17 of user core. Apr 24 23:41:00.121596 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:41:00.503237 sshd[4050]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:00.512011 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:54630.service: Deactivated successfully. 
Apr 24 23:41:00.513622 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:41:00.515945 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:41:00.522039 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:54644.service - OpenSSH per-connection server daemon (10.0.0.1:54644). Apr 24 23:41:00.525912 systemd-logind[1438]: Removed session 17. Apr 24 23:41:00.559446 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 54644 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:00.560667 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:00.564443 systemd-logind[1438]: New session 18 of user core. Apr 24 23:41:00.572610 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:41:00.768973 sshd[4069]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:00.776859 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:54644.service: Deactivated successfully. Apr 24 23:41:00.778231 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:41:00.779554 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:41:00.785171 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:54658.service - OpenSSH per-connection server daemon (10.0.0.1:54658). Apr 24 23:41:00.786434 systemd-logind[1438]: Removed session 18. Apr 24 23:41:00.817587 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 54658 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:00.818614 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:00.821706 systemd-logind[1438]: New session 19 of user core. Apr 24 23:41:00.832624 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 24 23:41:00.937951 sshd[4082]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:00.940986 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:54658.service: Deactivated successfully. Apr 24 23:41:00.943295 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:41:00.943972 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:41:00.945194 systemd-logind[1438]: Removed session 19. Apr 24 23:41:05.954065 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:54666.service - OpenSSH per-connection server daemon (10.0.0.1:54666). Apr 24 23:41:05.991632 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 54666 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:05.993204 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:05.996883 systemd-logind[1438]: New session 20 of user core. Apr 24 23:41:06.002628 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:41:06.115195 sshd[4101]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:06.118344 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:54666.service: Deactivated successfully. Apr 24 23:41:06.120840 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:41:06.121444 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:41:06.122283 systemd-logind[1438]: Removed session 20. Apr 24 23:41:11.131987 systemd[1]: Started sshd@20-10.0.0.62:22-10.0.0.1:35466.service - OpenSSH per-connection server daemon (10.0.0.1:35466). Apr 24 23:41:11.174151 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 35466 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:11.175584 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:11.179779 systemd-logind[1438]: New session 21 of user core. 
Apr 24 23:41:11.192954 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:41:11.337566 sshd[4118]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:11.340990 systemd[1]: sshd@20-10.0.0.62:22-10.0.0.1:35466.service: Deactivated successfully. Apr 24 23:41:11.342830 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:41:11.343440 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Apr 24 23:41:11.344532 systemd-logind[1438]: Removed session 21. Apr 24 23:41:16.358499 systemd[1]: Started sshd@21-10.0.0.62:22-10.0.0.1:36146.service - OpenSSH per-connection server daemon (10.0.0.1:36146). Apr 24 23:41:16.394717 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 36146 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:16.396286 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:16.400170 systemd-logind[1438]: New session 22 of user core. Apr 24 23:41:16.407646 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 23:41:16.507332 sshd[4132]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:16.516547 systemd[1]: sshd@21-10.0.0.62:22-10.0.0.1:36146.service: Deactivated successfully. Apr 24 23:41:16.517764 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 23:41:16.518834 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Apr 24 23:41:16.519891 systemd[1]: Started sshd@22-10.0.0.62:22-10.0.0.1:36162.service - OpenSSH per-connection server daemon (10.0.0.1:36162). Apr 24 23:41:16.520532 systemd-logind[1438]: Removed session 22. 
Apr 24 23:41:16.554936 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 36162 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:16.555916 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:16.558992 systemd-logind[1438]: New session 23 of user core. Apr 24 23:41:16.566647 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 24 23:41:17.920351 containerd[1458]: time="2026-04-24T23:41:17.920266849Z" level=info msg="StopContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" with timeout 30 (s)" Apr 24 23:41:17.921493 containerd[1458]: time="2026-04-24T23:41:17.921431190Z" level=info msg="Stop container \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" with signal terminated" Apr 24 23:41:17.943882 systemd[1]: cri-containerd-138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d.scope: Deactivated successfully. Apr 24 23:41:17.956482 containerd[1458]: time="2026-04-24T23:41:17.956339463Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:41:17.959818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d-rootfs.mount: Deactivated successfully. 
Apr 24 23:41:17.964375 containerd[1458]: time="2026-04-24T23:41:17.964340021Z" level=info msg="StopContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" with timeout 2 (s)" Apr 24 23:41:17.966054 containerd[1458]: time="2026-04-24T23:41:17.966005743Z" level=info msg="Stop container \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" with signal terminated" Apr 24 23:41:17.966292 containerd[1458]: time="2026-04-24T23:41:17.966142291Z" level=info msg="shim disconnected" id=138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d namespace=k8s.io Apr 24 23:41:17.966292 containerd[1458]: time="2026-04-24T23:41:17.966183041Z" level=warning msg="cleaning up after shim disconnected" id=138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d namespace=k8s.io Apr 24 23:41:17.966292 containerd[1458]: time="2026-04-24T23:41:17.966188985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:41:17.972942 systemd-networkd[1379]: lxc_health: Link DOWN Apr 24 23:41:17.972954 systemd-networkd[1379]: lxc_health: Lost carrier Apr 24 23:41:17.980245 containerd[1458]: time="2026-04-24T23:41:17.980203472Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:41:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:41:17.984337 containerd[1458]: time="2026-04-24T23:41:17.984295953Z" level=info msg="StopContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" returns successfully" Apr 24 23:41:17.985503 containerd[1458]: time="2026-04-24T23:41:17.985483094Z" level=info msg="StopPodSandbox for \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\"" Apr 24 23:41:17.985692 containerd[1458]: time="2026-04-24T23:41:17.985514998Z" level=info msg="Container to stop \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:41:17.986828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569-shm.mount: Deactivated successfully. Apr 24 23:41:17.993567 systemd[1]: cri-containerd-2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff.scope: Deactivated successfully. Apr 24 23:41:17.993859 systemd[1]: cri-containerd-2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff.scope: Consumed 5.575s CPU time. Apr 24 23:41:17.995868 systemd[1]: cri-containerd-707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569.scope: Deactivated successfully. Apr 24 23:41:18.014499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff-rootfs.mount: Deactivated successfully. Apr 24 23:41:18.016381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569-rootfs.mount: Deactivated successfully. 
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023309866Z" level=info msg="shim disconnected" id=2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff namespace=k8s.io
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023380122Z" level=warning msg="cleaning up after shim disconnected" id=2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff namespace=k8s.io
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023387720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023347232Z" level=info msg="shim disconnected" id=707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569 namespace=k8s.io
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023604658Z" level=warning msg="cleaning up after shim disconnected" id=707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569 namespace=k8s.io
Apr 24 23:41:18.023395 containerd[1458]: time="2026-04-24T23:41:18.023611497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:41:18.036058 containerd[1458]: time="2026-04-24T23:41:18.035874871Z" level=info msg="TearDown network for sandbox \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\" successfully"
Apr 24 23:41:18.036058 containerd[1458]: time="2026-04-24T23:41:18.035897455Z" level=info msg="StopPodSandbox for \"707d958604e1ff7e09973bd6d25e40b4f76e27bfd4082dd3786262bb670fa569\" returns successfully"
Apr 24 23:41:18.040061 containerd[1458]: time="2026-04-24T23:41:18.040018710Z" level=info msg="StopContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" returns successfully"
Apr 24 23:41:18.040382 containerd[1458]: time="2026-04-24T23:41:18.040351233Z" level=info msg="StopPodSandbox for \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\""
Apr 24 23:41:18.040428 containerd[1458]: time="2026-04-24T23:41:18.040395063Z" level=info msg="Container to stop \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:41:18.040428 containerd[1458]: time="2026-04-24T23:41:18.040404796Z" level=info msg="Container to stop \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:41:18.040428 containerd[1458]: time="2026-04-24T23:41:18.040411469Z" level=info msg="Container to stop \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:41:18.040428 containerd[1458]: time="2026-04-24T23:41:18.040418473Z" level=info msg="Container to stop \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:41:18.040428 containerd[1458]: time="2026-04-24T23:41:18.040427008Z" level=info msg="Container to stop \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 23:41:18.046032 systemd[1]: cri-containerd-e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a.scope: Deactivated successfully.
Apr 24 23:41:18.073085 containerd[1458]: time="2026-04-24T23:41:18.073015113Z" level=info msg="shim disconnected" id=e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a namespace=k8s.io
Apr 24 23:41:18.073085 containerd[1458]: time="2026-04-24T23:41:18.073070568Z" level=warning msg="cleaning up after shim disconnected" id=e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a namespace=k8s.io
Apr 24 23:41:18.073085 containerd[1458]: time="2026-04-24T23:41:18.073077813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:41:18.085684 containerd[1458]: time="2026-04-24T23:41:18.085627584Z" level=info msg="TearDown network for sandbox \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" successfully"
Apr 24 23:41:18.085684 containerd[1458]: time="2026-04-24T23:41:18.085660659Z" level=info msg="StopPodSandbox for \"e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a\" returns successfully"
Apr 24 23:41:18.245108 kubelet[2505]: I0424 23:41:18.244687 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bca8add5-f396-40b5-a164-6ab8c8f595a1-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bca8add5-f396-40b5-a164-6ab8c8f595a1-cilium-config-path\") pod \"bca8add5-f396-40b5-a164-6ab8c8f595a1\" (UID: \"bca8add5-f396-40b5-a164-6ab8c8f595a1\") "
Apr 24 23:41:18.245108 kubelet[2505]: I0424 23:41:18.244724 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bca8add5-f396-40b5-a164-6ab8c8f595a1-kube-api-access-dxjpq\" (UniqueName: \"kubernetes.io/projected/bca8add5-f396-40b5-a164-6ab8c8f595a1-kube-api-access-dxjpq\") pod \"bca8add5-f396-40b5-a164-6ab8c8f595a1\" (UID: \"bca8add5-f396-40b5-a164-6ab8c8f595a1\") "
Apr 24 23:41:18.245108 kubelet[2505]: I0424 23:41:18.244740 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/6a15d670-4e6f-4959-8eb9-6c1c68d673df-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a15d670-4e6f-4959-8eb9-6c1c68d673df-clustermesh-secrets\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245108 kubelet[2505]: I0424 23:41:18.244792 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-config-path\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245108 kubelet[2505]: I0424 23:41:18.244807 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-net\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245654 kubelet[2505]: I0424 23:41:18.244819 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-cgroup\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245654 kubelet[2505]: I0424 23:41:18.244831 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-bpf-maps\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245654 kubelet[2505]: I0424 23:41:18.244843 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-lib-modules\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245654 kubelet[2505]: I0424 23:41:18.244856 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-kernel\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245654 kubelet[2505]: I0424 23:41:18.244910 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-xtables-lock\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245740 kubelet[2505]: I0424 23:41:18.244921 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-run\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245740 kubelet[2505]: I0424 23:41:18.244934 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hubble-tls\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245740 kubelet[2505]: I0424 23:41:18.244944 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cni-path\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cni-path\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245740 kubelet[2505]: I0424 23:41:18.244957 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-kube-api-access-7sjpv\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-kube-api-access-7sjpv\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245740 kubelet[2505]: I0424 23:41:18.244969 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hostproc\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hostproc\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245823 kubelet[2505]: I0424 23:41:18.244979 2505 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-etc-cni-netd\") pod \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\" (UID: \"6a15d670-4e6f-4959-8eb9-6c1c68d673df\") "
Apr 24 23:41:18.245823 kubelet[2505]: I0424 23:41:18.245023 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-etc-cni-netd" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.245823 kubelet[2505]: I0424 23:41:18.245212 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-kernel" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.246754 kubelet[2505]: I0424 23:41:18.246728 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca8add5-f396-40b5-a164-6ab8c8f595a1-cilium-config-path" pod "bca8add5-f396-40b5-a164-6ab8c8f595a1" (UID: "bca8add5-f396-40b5-a164-6ab8c8f595a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 23:41:18.247352 kubelet[2505]: I0424 23:41:18.246926 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-config-path" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 23:41:18.247352 kubelet[2505]: I0424 23:41:18.246952 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-net" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.247352 kubelet[2505]: I0424 23:41:18.246963 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-cgroup" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.247352 kubelet[2505]: I0424 23:41:18.246973 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-bpf-maps" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.247352 kubelet[2505]: I0424 23:41:18.246983 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-lib-modules" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.247500 kubelet[2505]: I0424 23:41:18.246995 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-xtables-lock" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.248489 kubelet[2505]: I0424 23:41:18.248440 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-run" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.248677 kubelet[2505]: I0424 23:41:18.248447 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hostproc" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.249269 kubelet[2505]: I0424 23:41:18.248587 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cni-path" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 23:41:18.249710 kubelet[2505]: I0424 23:41:18.249680 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a15d670-4e6f-4959-8eb9-6c1c68d673df-clustermesh-secrets" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 23:41:18.249989 kubelet[2505]: I0424 23:41:18.249968 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hubble-tls" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 23:41:18.250042 kubelet[2505]: I0424 23:41:18.249991 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca8add5-f396-40b5-a164-6ab8c8f595a1-kube-api-access-dxjpq" pod "bca8add5-f396-40b5-a164-6ab8c8f595a1" (UID: "bca8add5-f396-40b5-a164-6ab8c8f595a1"). InnerVolumeSpecName "kube-api-access-dxjpq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 23:41:18.250491 kubelet[2505]: I0424 23:41:18.250439 2505 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-kube-api-access-7sjpv" pod "6a15d670-4e6f-4959-8eb9-6c1c68d673df" (UID: "6a15d670-4e6f-4959-8eb9-6c1c68d673df"). InnerVolumeSpecName "kube-api-access-7sjpv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 23:41:18.345751 kubelet[2505]: I0424 23:41:18.345719 2505 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345751 kubelet[2505]: I0424 23:41:18.345746 2505 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345751 kubelet[2505]: I0424 23:41:18.345753 2505 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345751 kubelet[2505]: I0424 23:41:18.345761 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7sjpv\" (UniqueName: \"kubernetes.io/projected/6a15d670-4e6f-4959-8eb9-6c1c68d673df-kube-api-access-7sjpv\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345770 2505 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345776 2505 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345781 2505 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bca8add5-f396-40b5-a164-6ab8c8f595a1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345787 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxjpq\" (UniqueName: \"kubernetes.io/projected/bca8add5-f396-40b5-a164-6ab8c8f595a1-kube-api-access-dxjpq\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345793 2505 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a15d670-4e6f-4959-8eb9-6c1c68d673df-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345798 2505 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345803 2505 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.345941 kubelet[2505]: I0424 23:41:18.345808 2505 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.346064 kubelet[2505]: I0424 23:41:18.345813 2505 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.346064 kubelet[2505]: I0424 23:41:18.345818 2505 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.346064 kubelet[2505]: I0424 23:41:18.345823 2505 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.346064 kubelet[2505]: I0424 23:41:18.345829 2505 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a15d670-4e6f-4959-8eb9-6c1c68d673df-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 24 23:41:18.935840 systemd[1]: var-lib-kubelet-pods-bca8add5\x2df396\x2d40b5\x2da164\x2d6ab8c8f595a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxjpq.mount: Deactivated successfully.
Apr 24 23:41:18.935957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a-rootfs.mount: Deactivated successfully.
Apr 24 23:41:18.936000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e110b579610f93f52a87e9cb2088b748889ef04244264072058aa5d662587d3a-shm.mount: Deactivated successfully.
Apr 24 23:41:18.936044 systemd[1]: var-lib-kubelet-pods-6a15d670\x2d4e6f\x2d4959\x2d8eb9\x2d6c1c68d673df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7sjpv.mount: Deactivated successfully.
Apr 24 23:41:18.936081 systemd[1]: var-lib-kubelet-pods-6a15d670\x2d4e6f\x2d4959\x2d8eb9\x2d6c1c68d673df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 24 23:41:18.936118 systemd[1]: var-lib-kubelet-pods-6a15d670\x2d4e6f\x2d4959\x2d8eb9\x2d6c1c68d673df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 24 23:41:19.038504 kubelet[2505]: I0424 23:41:19.038377 2505 scope.go:122] "RemoveContainer" containerID="138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d"
Apr 24 23:41:19.040720 containerd[1458]: time="2026-04-24T23:41:19.040055391Z" level=info msg="RemoveContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\""
Apr 24 23:41:19.042388 systemd[1]: Removed slice kubepods-besteffort-podbca8add5_f396_40b5_a164_6ab8c8f595a1.slice - libcontainer container kubepods-besteffort-podbca8add5_f396_40b5_a164_6ab8c8f595a1.slice.
Apr 24 23:41:19.044536 systemd[1]: Removed slice kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice - libcontainer container kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice.
Apr 24 23:41:19.044614 systemd[1]: kubepods-burstable-pod6a15d670_4e6f_4959_8eb9_6c1c68d673df.slice: Consumed 5.655s CPU time.
Apr 24 23:41:19.045552 containerd[1458]: time="2026-04-24T23:41:19.045523333Z" level=info msg="RemoveContainer for \"138b471c646c3276f95141a50870d76efb5dec676258d581b998f04f0c39034d\" returns successfully"
Apr 24 23:41:19.045776 kubelet[2505]: I0424 23:41:19.045755 2505 scope.go:122] "RemoveContainer" containerID="2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff"
Apr 24 23:41:19.046679 containerd[1458]: time="2026-04-24T23:41:19.046649908Z" level=info msg="RemoveContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\""
Apr 24 23:41:19.062617 containerd[1458]: time="2026-04-24T23:41:19.062560518Z" level=info msg="RemoveContainer for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" returns successfully"
Apr 24 23:41:19.063019 kubelet[2505]: I0424 23:41:19.062992 2505 scope.go:122] "RemoveContainer" containerID="3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b"
Apr 24 23:41:19.064388 containerd[1458]: time="2026-04-24T23:41:19.064365433Z" level=info msg="RemoveContainer for \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\""
Apr 24 23:41:19.067987 containerd[1458]: time="2026-04-24T23:41:19.067836017Z" level=info msg="RemoveContainer for \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\" returns successfully"
Apr 24 23:41:19.068702 kubelet[2505]: I0424 23:41:19.068489 2505 scope.go:122] "RemoveContainer" containerID="8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4"
Apr 24 23:41:19.072243 containerd[1458]: time="2026-04-24T23:41:19.072051679Z" level=info msg="RemoveContainer for \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\""
Apr 24 23:41:19.076926 containerd[1458]: time="2026-04-24T23:41:19.076884992Z" level=info msg="RemoveContainer for \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\" returns successfully"
Apr 24 23:41:19.077140 kubelet[2505]: I0424 23:41:19.077105 2505 scope.go:122] "RemoveContainer" containerID="619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb"
Apr 24 23:41:19.079789 containerd[1458]: time="2026-04-24T23:41:19.079766323Z" level=info msg="RemoveContainer for \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\""
Apr 24 23:41:19.082657 containerd[1458]: time="2026-04-24T23:41:19.082589859Z" level=info msg="RemoveContainer for \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\" returns successfully"
Apr 24 23:41:19.083107 kubelet[2505]: I0424 23:41:19.083083 2505 scope.go:122] "RemoveContainer" containerID="a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919"
Apr 24 23:41:19.084330 containerd[1458]: time="2026-04-24T23:41:19.084307512Z" level=info msg="RemoveContainer for \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\""
Apr 24 23:41:19.087002 containerd[1458]: time="2026-04-24T23:41:19.086910209Z" level=info msg="RemoveContainer for \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\" returns successfully"
Apr 24 23:41:19.087415 kubelet[2505]: I0424 23:41:19.087358 2505 scope.go:122] "RemoveContainer" containerID="2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff"
Apr 24 23:41:19.091383 containerd[1458]: time="2026-04-24T23:41:19.091307705Z" level=error msg="ContainerStatus for \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\": not found"
Apr 24 23:41:19.096877 kubelet[2505]: E0424 23:41:19.096807 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\": not found" containerID="2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff"
Apr 24 23:41:19.096938 kubelet[2505]: I0424 23:41:19.096876 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff"} err="failed to get container status \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ac5665a14c5ea9070fac6bf7502e72a5980aaf3d7a8ed825e254740993d99ff\": not found"
Apr 24 23:41:19.096938 kubelet[2505]: I0424 23:41:19.096907 2505 scope.go:122] "RemoveContainer" containerID="3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b"
Apr 24 23:41:19.097099 containerd[1458]: time="2026-04-24T23:41:19.097069797Z" level=error msg="ContainerStatus for \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\": not found"
Apr 24 23:41:19.097293 kubelet[2505]: E0424 23:41:19.097261 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\": not found" containerID="3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b"
Apr 24 23:41:19.097322 kubelet[2505]: I0424 23:41:19.097286 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b"} err="failed to get container status \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3008e0b5ed407ff9a3ed2f4455e0323b701733cf3aaf227392d204cb3bbced4b\": not found"
Apr 24 23:41:19.097322 kubelet[2505]: I0424 23:41:19.097301 2505 scope.go:122] "RemoveContainer" containerID="8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4"
Apr 24 23:41:19.097544 containerd[1458]: time="2026-04-24T23:41:19.097518328Z" level=error msg="ContainerStatus for \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\": not found"
Apr 24 23:41:19.097657 kubelet[2505]: E0424 23:41:19.097640 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\": not found" containerID="8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4"
Apr 24 23:41:19.097680 kubelet[2505]: I0424 23:41:19.097666 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4"} err="failed to get container status \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb621ba4853ccf5026669bc3334614d7526edf13aca31595c92cd184e3005a4\": not found"
Apr 24 23:41:19.097680 kubelet[2505]: I0424 23:41:19.097678 2505 scope.go:122] "RemoveContainer" containerID="619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb"
Apr 24 23:41:19.097867 containerd[1458]: time="2026-04-24T23:41:19.097837850Z" level=error msg="ContainerStatus for \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\": not found"
Apr 24 23:41:19.097970 kubelet[2505]: E0424 23:41:19.097948 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\": not found" containerID="619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb"
Apr 24 23:41:19.097989 kubelet[2505]: I0424 23:41:19.097972 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb"} err="failed to get container status \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"619ad38aae93df2f13a1f02ff305c7a8a7ba9e5ed2319716892af67debb7a9bb\": not found"
Apr 24 23:41:19.098010 kubelet[2505]: I0424 23:41:19.097992 2505 scope.go:122] "RemoveContainer" containerID="a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919"
Apr 24 23:41:19.098222 containerd[1458]: time="2026-04-24T23:41:19.098175220Z" level=error msg="ContainerStatus for \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\": not found"
Apr 24 23:41:19.098324 kubelet[2505]: E0424 23:41:19.098300 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\": not found" containerID="a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919"
Apr 24 23:41:19.098324 kubelet[2505]: I0424 23:41:19.098317 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919"} err="failed to get container status \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9f9468ae9795bceb0998ee8a13cda6b2c3874074b61957229f554c1f3f3d919\": not found"
Apr 24 23:41:19.628532 kubelet[2505]: I0424 23:41:19.628388 2505 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6a15d670-4e6f-4959-8eb9-6c1c68d673df" path="/var/lib/kubelet/pods/6a15d670-4e6f-4959-8eb9-6c1c68d673df/volumes"
Apr 24 23:41:19.629210 kubelet[2505]: I0424 23:41:19.629022 2505 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bca8add5-f396-40b5-a164-6ab8c8f595a1" path="/var/lib/kubelet/pods/bca8add5-f396-40b5-a164-6ab8c8f595a1/volumes"
Apr 24 23:41:19.857493 kubelet[2505]: E0424 23:41:19.857278 2505 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 24 23:41:19.866890 sshd[4146]: pam_unix(sshd:session): session closed for user core
Apr 24 23:41:19.877502 systemd[1]: sshd@22-10.0.0.62:22-10.0.0.1:36162.service: Deactivated successfully.
Apr 24 23:41:19.878639 systemd[1]: session-23.scope: Deactivated successfully. Apr 24 23:41:19.879656 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Apr 24 23:41:19.880587 systemd[1]: Started sshd@23-10.0.0.62:22-10.0.0.1:36168.service - OpenSSH per-connection server daemon (10.0.0.1:36168). Apr 24 23:41:19.881749 systemd-logind[1438]: Removed session 23. Apr 24 23:41:19.919282 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 36168 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:19.920203 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:19.923359 systemd-logind[1438]: New session 24 of user core. Apr 24 23:41:19.939594 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 24 23:41:20.494835 sshd[4310]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:20.503472 systemd[1]: sshd@23-10.0.0.62:22-10.0.0.1:36168.service: Deactivated successfully. Apr 24 23:41:20.505968 systemd[1]: session-24.scope: Deactivated successfully. Apr 24 23:41:20.508336 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Apr 24 23:41:20.519241 systemd[1]: Started sshd@24-10.0.0.62:22-10.0.0.1:36172.service - OpenSSH per-connection server daemon (10.0.0.1:36172). Apr 24 23:41:20.520943 systemd-logind[1438]: Removed session 24. Apr 24 23:41:20.532632 systemd[1]: Created slice kubepods-burstable-pod863fd685_ca58_49d8_b49b_ed78ca21f114.slice - libcontainer container kubepods-burstable-pod863fd685_ca58_49d8_b49b_ed78ca21f114.slice. 
Apr 24 23:41:20.558841 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 36172 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558850 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-bpf-maps\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558877 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-cilium-cgroup\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558890 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/863fd685-ca58-49d8-b49b-ed78ca21f114-cilium-ipsec-secrets\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558903 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-host-proc-sys-net\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558915 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/863fd685-ca58-49d8-b49b-ed78ca21f114-hubble-tls\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " 
pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559213 kubelet[2505]: I0424 23:41:20.558926 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-lib-modules\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558936 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-cilium-run\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558948 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-hostproc\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558960 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-cni-path\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558971 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-etc-cni-netd\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558981 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-xtables-lock\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559399 kubelet[2505]: I0424 23:41:20.558992 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/863fd685-ca58-49d8-b49b-ed78ca21f114-clustermesh-secrets\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559513 kubelet[2505]: I0424 23:41:20.559004 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/863fd685-ca58-49d8-b49b-ed78ca21f114-cilium-config-path\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559513 kubelet[2505]: I0424 23:41:20.559025 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/863fd685-ca58-49d8-b49b-ed78ca21f114-host-proc-sys-kernel\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.559513 kubelet[2505]: I0424 23:41:20.559036 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchqx\" (UniqueName: \"kubernetes.io/projected/863fd685-ca58-49d8-b49b-ed78ca21f114-kube-api-access-rchqx\") pod \"cilium-vvnv2\" (UID: \"863fd685-ca58-49d8-b49b-ed78ca21f114\") " pod="kube-system/cilium-vvnv2" Apr 24 23:41:20.560232 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:20.564510 systemd-logind[1438]: New session 25 of 
user core. Apr 24 23:41:20.570599 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 24 23:41:20.619702 sshd[4322]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:20.629266 systemd[1]: sshd@24-10.0.0.62:22-10.0.0.1:36172.service: Deactivated successfully. Apr 24 23:41:20.630391 systemd[1]: session-25.scope: Deactivated successfully. Apr 24 23:41:20.631396 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. Apr 24 23:41:20.632297 systemd[1]: Started sshd@25-10.0.0.62:22-10.0.0.1:36184.service - OpenSSH per-connection server daemon (10.0.0.1:36184). Apr 24 23:41:20.633747 systemd-logind[1438]: Removed session 25. Apr 24 23:41:20.667191 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 36184 ssh2: RSA SHA256:6TS4vliro6dGRNKsEvxpr5tJ8Ujqm5fyS/jf5/T27qg Apr 24 23:41:20.668554 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:41:20.673905 systemd-logind[1438]: New session 26 of user core. Apr 24 23:41:20.678574 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 24 23:41:20.844259 kubelet[2505]: E0424 23:41:20.844095 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:20.845435 containerd[1458]: time="2026-04-24T23:41:20.845191521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvnv2,Uid:863fd685-ca58-49d8-b49b-ed78ca21f114,Namespace:kube-system,Attempt:0,}" Apr 24 23:41:20.865182 containerd[1458]: time="2026-04-24T23:41:20.864490243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:41:20.865182 containerd[1458]: time="2026-04-24T23:41:20.865156456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:41:20.865436 containerd[1458]: time="2026-04-24T23:41:20.865168707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:41:20.865436 containerd[1458]: time="2026-04-24T23:41:20.865260022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:41:20.886658 systemd[1]: Started cri-containerd-94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8.scope - libcontainer container 94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8. Apr 24 23:41:20.904338 containerd[1458]: time="2026-04-24T23:41:20.904285849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvnv2,Uid:863fd685-ca58-49d8-b49b-ed78ca21f114,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\"" Apr 24 23:41:20.904936 kubelet[2505]: E0424 23:41:20.904902 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:20.909928 containerd[1458]: time="2026-04-24T23:41:20.909883945Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 23:41:20.919751 containerd[1458]: time="2026-04-24T23:41:20.919636723Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b\"" Apr 24 23:41:20.920284 containerd[1458]: time="2026-04-24T23:41:20.920240517Z" level=info msg="StartContainer for 
\"1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b\"" Apr 24 23:41:20.945137 systemd[1]: Started cri-containerd-1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b.scope - libcontainer container 1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b. Apr 24 23:41:20.966825 containerd[1458]: time="2026-04-24T23:41:20.966759299Z" level=info msg="StartContainer for \"1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b\" returns successfully" Apr 24 23:41:20.976004 systemd[1]: cri-containerd-1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b.scope: Deactivated successfully. Apr 24 23:41:21.003997 containerd[1458]: time="2026-04-24T23:41:21.003905663Z" level=info msg="shim disconnected" id=1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b namespace=k8s.io Apr 24 23:41:21.003997 containerd[1458]: time="2026-04-24T23:41:21.003978448Z" level=warning msg="cleaning up after shim disconnected" id=1af97af641585c6d464abbb07b597485ccfaae406a6a910c7271f35bd7f9bd8b namespace=k8s.io Apr 24 23:41:21.003997 containerd[1458]: time="2026-04-24T23:41:21.003987976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:41:21.049922 kubelet[2505]: E0424 23:41:21.049869 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:21.058498 containerd[1458]: time="2026-04-24T23:41:21.056700517Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 23:41:21.071445 containerd[1458]: time="2026-04-24T23:41:21.071384267Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765\"" Apr 24 23:41:21.071974 containerd[1458]: time="2026-04-24T23:41:21.071934752Z" level=info msg="StartContainer for \"4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765\"" Apr 24 23:41:21.101633 systemd[1]: Started cri-containerd-4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765.scope - libcontainer container 4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765. Apr 24 23:41:21.120661 containerd[1458]: time="2026-04-24T23:41:21.120628686Z" level=info msg="StartContainer for \"4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765\" returns successfully" Apr 24 23:41:21.124963 systemd[1]: cri-containerd-4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765.scope: Deactivated successfully. Apr 24 23:41:21.146399 containerd[1458]: time="2026-04-24T23:41:21.146292548Z" level=info msg="shim disconnected" id=4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765 namespace=k8s.io Apr 24 23:41:21.146399 containerd[1458]: time="2026-04-24T23:41:21.146356756Z" level=warning msg="cleaning up after shim disconnected" id=4f46e4accb93b2d3cfc42f33529a2bed78f972ec6807334f4716fd6ef8580765 namespace=k8s.io Apr 24 23:41:21.146399 containerd[1458]: time="2026-04-24T23:41:21.146379028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:41:21.253402 kubelet[2505]: I0424 23:41:21.253250 2505 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-24T23:41:21Z","lastTransitionTime":"2026-04-24T23:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 24 23:41:22.054205 kubelet[2505]: E0424 23:41:22.054153 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:22.060569 containerd[1458]: time="2026-04-24T23:41:22.060518055Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 23:41:22.077779 containerd[1458]: time="2026-04-24T23:41:22.077739286Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115\"" Apr 24 23:41:22.078139 containerd[1458]: time="2026-04-24T23:41:22.078119462Z" level=info msg="StartContainer for \"487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115\"" Apr 24 23:41:22.101635 systemd[1]: Started cri-containerd-487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115.scope - libcontainer container 487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115. Apr 24 23:41:22.122914 containerd[1458]: time="2026-04-24T23:41:22.122860919Z" level=info msg="StartContainer for \"487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115\" returns successfully" Apr 24 23:41:22.126082 systemd[1]: cri-containerd-487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115.scope: Deactivated successfully. 
Apr 24 23:41:22.149684 containerd[1458]: time="2026-04-24T23:41:22.149583151Z" level=info msg="shim disconnected" id=487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115 namespace=k8s.io Apr 24 23:41:22.149684 containerd[1458]: time="2026-04-24T23:41:22.149636360Z" level=warning msg="cleaning up after shim disconnected" id=487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115 namespace=k8s.io Apr 24 23:41:22.149684 containerd[1458]: time="2026-04-24T23:41:22.149649149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:41:22.665220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-487393597008248aa673ed9e42ca2dcbb8ee2fde7da523f5471a3c033d4d3115-rootfs.mount: Deactivated successfully. Apr 24 23:41:23.058986 kubelet[2505]: E0424 23:41:23.058959 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:23.064977 containerd[1458]: time="2026-04-24T23:41:23.064920689Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 23:41:23.075467 containerd[1458]: time="2026-04-24T23:41:23.075420516Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33\"" Apr 24 23:41:23.076154 containerd[1458]: time="2026-04-24T23:41:23.076096536Z" level=info msg="StartContainer for \"8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33\"" Apr 24 23:41:23.101620 systemd[1]: Started cri-containerd-8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33.scope - libcontainer container 8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33. 
Apr 24 23:41:23.119624 systemd[1]: cri-containerd-8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33.scope: Deactivated successfully. Apr 24 23:41:23.121789 containerd[1458]: time="2026-04-24T23:41:23.121754301Z" level=info msg="StartContainer for \"8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33\" returns successfully" Apr 24 23:41:23.137752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33-rootfs.mount: Deactivated successfully. Apr 24 23:41:23.141224 containerd[1458]: time="2026-04-24T23:41:23.141154697Z" level=info msg="shim disconnected" id=8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33 namespace=k8s.io Apr 24 23:41:23.141224 containerd[1458]: time="2026-04-24T23:41:23.141202938Z" level=warning msg="cleaning up after shim disconnected" id=8a5aafb40afebeea83f4ee36c0b2952019ac3599b207d852140da13e35700e33 namespace=k8s.io Apr 24 23:41:23.141224 containerd[1458]: time="2026-04-24T23:41:23.141209926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:41:24.064307 kubelet[2505]: E0424 23:41:24.064247 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:24.068429 containerd[1458]: time="2026-04-24T23:41:24.068395250Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 24 23:41:24.081726 containerd[1458]: time="2026-04-24T23:41:24.081694050Z" level=info msg="CreateContainer within sandbox \"94d598bf6d5d557e1ca8bb2b237b18ad83b2737fbb3720b5dc3422e5c2020ad8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba\"" Apr 24 23:41:24.082234 containerd[1458]: 
time="2026-04-24T23:41:24.082179893Z" level=info msg="StartContainer for \"ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba\"" Apr 24 23:41:24.111621 systemd[1]: Started cri-containerd-ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba.scope - libcontainer container ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba. Apr 24 23:41:24.133519 containerd[1458]: time="2026-04-24T23:41:24.133432990Z" level=info msg="StartContainer for \"ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba\" returns successfully" Apr 24 23:41:24.358528 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 24 23:41:25.071282 kubelet[2505]: E0424 23:41:25.070238 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:25.086599 kubelet[2505]: I0424 23:41:25.086537 2505 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-vvnv2" podStartSLOduration=5.086525939 podStartE2EDuration="5.086525939s" podCreationTimestamp="2026-04-24 23:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:41:25.086373474 +0000 UTC m=+75.580584268" watchObservedRunningTime="2026-04-24 23:41:25.086525939 +0000 UTC m=+75.580736738" Apr 24 23:41:25.627014 kubelet[2505]: E0424 23:41:25.626886 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:26.627162 kubelet[2505]: E0424 23:41:26.627078 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:26.842784 kubelet[2505]: E0424 23:41:26.842595 2505 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:27.011577 systemd[1]: run-containerd-runc-k8s.io-ea1da20fb1929065b64b3018d493b6a7d5e72377c15027f6fe5f6a3f77605aba-runc.jrS9Zk.mount: Deactivated successfully. Apr 24 23:41:27.080422 systemd-networkd[1379]: lxc_health: Link UP Apr 24 23:41:27.091622 systemd-networkd[1379]: lxc_health: Gained carrier Apr 24 23:41:28.784203 systemd-networkd[1379]: lxc_health: Gained IPv6LL Apr 24 23:41:28.843917 kubelet[2505]: E0424 23:41:28.843619 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:29.086220 kubelet[2505]: E0424 23:41:29.086034 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:30.088069 kubelet[2505]: E0424 23:41:30.088020 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:30.626029 kubelet[2505]: E0424 23:41:30.625987 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:32.627363 kubelet[2505]: E0424 23:41:32.627270 2505 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 23:41:33.819329 sshd[4330]: pam_unix(sshd:session): session closed for user core Apr 24 23:41:33.822012 systemd[1]: sshd@25-10.0.0.62:22-10.0.0.1:36184.service: Deactivated successfully. 
Apr 24 23:41:33.823342 systemd[1]: session-26.scope: Deactivated successfully. Apr 24 23:41:33.823868 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Apr 24 23:41:33.824581 systemd-logind[1438]: Removed session 26.