Apr 16 04:56:41.998519 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026
Apr 16 04:56:41.998551 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 04:56:41.998568 kernel: BIOS-provided physical RAM map:
Apr 16 04:56:41.998575 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 04:56:41.998581 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 04:56:41.998588 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 04:56:41.998597 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 04:56:41.998604 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 04:56:41.998611 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 04:56:41.998618 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 04:56:41.998625 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 04:56:41.998636 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 04:56:41.998643 kernel: NX (Execute Disable) protection: active
Apr 16 04:56:41.998650 kernel: APIC: Static calls initialized
Apr 16 04:56:41.998659 kernel: SMBIOS 2.8 present.
Apr 16 04:56:41.998667 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 04:56:41.998676 kernel: DMI: Memory slots populated: 1/1
Apr 16 04:56:41.998684 kernel: Hypervisor detected: KVM
Apr 16 04:56:41.998692 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:56:41.998700 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 04:56:41.998708 kernel: kvm-clock: using sched offset of 6493762542 cycles
Apr 16 04:56:41.998717 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 04:56:41.998726 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 04:56:41.998732 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 04:56:41.998738 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 04:56:41.998743 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 04:56:41.998750 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 04:56:41.998755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 04:56:41.998760 kernel: Using GB pages for direct mapping
Apr 16 04:56:41.998765 kernel: ACPI: Early table checksum verification disabled
Apr 16 04:56:41.998770 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 04:56:41.998775 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998780 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998785 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998790 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 04:56:41.998797 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998802 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998807 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998812 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 04:56:41.998817 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 04:56:41.998824 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 04:56:41.998831 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 04:56:41.998836 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 04:56:41.998841 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 04:56:41.998847 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 04:56:41.998852 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 04:56:41.998857 kernel: No NUMA configuration found
Apr 16 04:56:41.998862 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 04:56:41.998868 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 16 04:56:41.998874 kernel: Zone ranges:
Apr 16 04:56:41.998879 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 04:56:41.998884 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 04:56:41.998889 kernel: Normal empty
Apr 16 04:56:41.998895 kernel: Device empty
Apr 16 04:56:41.998900 kernel: Movable zone start for each node
Apr 16 04:56:41.998905 kernel: Early memory node ranges
Apr 16 04:56:41.998910 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 04:56:41.998915 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 04:56:41.998920 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 04:56:41.998927 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 04:56:41.998932 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 04:56:41.998937 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 04:56:41.998942 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 04:56:41.998947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 04:56:41.998952 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 04:56:41.998958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 04:56:41.998963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 04:56:41.998989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 04:56:41.998996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 04:56:41.999002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 04:56:41.999007 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 04:56:41.999012 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 04:56:41.999017 kernel: TSC deadline timer available
Apr 16 04:56:41.999022 kernel: CPU topo: Max. logical packages: 1
Apr 16 04:56:41.999028 kernel: CPU topo: Max. logical dies: 1
Apr 16 04:56:41.999033 kernel: CPU topo: Max. dies per package: 1
Apr 16 04:56:41.999038 kernel: CPU topo: Max. threads per core: 1
Apr 16 04:56:41.999044 kernel: CPU topo: Num. cores per package: 4
Apr 16 04:56:41.999049 kernel: CPU topo: Num. threads per package: 4
Apr 16 04:56:41.999055 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 16 04:56:41.999060 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 04:56:41.999065 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 04:56:41.999070 kernel: kvm-guest: setup PV sched yield
Apr 16 04:56:41.999075 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 04:56:41.999080 kernel: Booting paravirtualized kernel on KVM
Apr 16 04:56:41.999086 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 04:56:41.999091 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 04:56:41.999098 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 16 04:56:41.999103 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 16 04:56:41.999291 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 04:56:41.999299 kernel: kvm-guest: PV spinlocks enabled
Apr 16 04:56:41.999304 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 04:56:41.999310 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 04:56:41.999316 kernel: random: crng init done
Apr 16 04:56:41.999321 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 04:56:41.999333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 04:56:41.999338 kernel: Fallback order for Node 0: 0
Apr 16 04:56:41.999345 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 16 04:56:41.999354 kernel: Policy zone: DMA32
Apr 16 04:56:41.999362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 04:56:41.999370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 04:56:41.999377 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 04:56:41.999386 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 04:56:41.999393 kernel: Dynamic Preempt: voluntary
Apr 16 04:56:41.999404 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 04:56:41.999414 kernel: rcu: RCU event tracing is enabled.
Apr 16 04:56:41.999422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 04:56:41.999431 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 04:56:41.999440 kernel: Rude variant of Tasks RCU enabled.
Apr 16 04:56:41.999448 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 04:56:41.999457 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 04:56:41.999472 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 04:56:41.999477 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:56:41.999484 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:56:41.999490 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 04:56:41.999495 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 04:56:41.999500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 04:56:41.999506 kernel: Console: colour VGA+ 80x25
Apr 16 04:56:41.999516 kernel: printk: legacy console [ttyS0] enabled
Apr 16 04:56:41.999523 kernel: ACPI: Core revision 20240827
Apr 16 04:56:41.999529 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 04:56:41.999536 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 04:56:41.999545 kernel: x2apic enabled
Apr 16 04:56:41.999554 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 04:56:41.999561 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 04:56:41.999576 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 04:56:41.999584 kernel: kvm-guest: setup PV IPIs
Apr 16 04:56:41.999592 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 04:56:41.999602 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:56:41.999610 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 04:56:41.999623 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 04:56:41.999632 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 04:56:41.999642 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 04:56:41.999652 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 04:56:41.999660 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 04:56:41.999670 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 04:56:41.999678 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 04:56:41.999688 kernel: RETBleed: Vulnerable
Apr 16 04:56:41.999699 kernel: Speculative Store Bypass: Vulnerable
Apr 16 04:56:41.999709 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 04:56:41.999719 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 04:56:41.999728 kernel: active return thunk: its_return_thunk
Apr 16 04:56:41.999738 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 04:56:41.999751 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 04:56:41.999757 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 04:56:41.999763 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 04:56:41.999768 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 04:56:41.999776 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 04:56:41.999781 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 04:56:41.999787 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 04:56:41.999793 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 04:56:41.999799 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 04:56:41.999805 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 04:56:41.999810 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 04:56:41.999816 kernel: Freeing SMP alternatives memory: 32K
Apr 16 04:56:41.999822 kernel: pid_max: default: 32768 minimum: 301
Apr 16 04:56:41.999829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 04:56:41.999835 kernel: landlock: Up and running.
Apr 16 04:56:41.999840 kernel: SELinux: Initializing.
Apr 16 04:56:41.999846 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:56:41.999852 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 04:56:41.999858 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 04:56:41.999863 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 04:56:41.999869 kernel: signal: max sigframe size: 3632
Apr 16 04:56:41.999875 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 04:56:41.999882 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 04:56:41.999888 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 04:56:41.999894 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 04:56:41.999899 kernel: smp: Bringing up secondary CPUs ...
Apr 16 04:56:41.999905 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 04:56:41.999911 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 04:56:41.999916 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 04:56:41.999922 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 04:56:41.999928 kernel: Memory: 2419748K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 146112K reserved, 0K cma-reserved)
Apr 16 04:56:41.999936 kernel: devtmpfs: initialized
Apr 16 04:56:41.999941 kernel: x86/mm: Memory block size: 128MB
Apr 16 04:56:41.999947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 04:56:41.999953 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 04:56:41.999959 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 04:56:41.999964 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 04:56:41.999990 kernel: audit: initializing netlink subsys (disabled)
Apr 16 04:56:41.999995 kernel: audit: type=2000 audit(1776315397.733:1): state=initialized audit_enabled=0 res=1
Apr 16 04:56:42.000001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 04:56:42.000009 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 04:56:42.000014 kernel: cpuidle: using governor menu
Apr 16 04:56:42.000020 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 04:56:42.000026 kernel: dca service started, version 1.12.1
Apr 16 04:56:42.000032 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 16 04:56:42.000038 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 04:56:42.000043 kernel: PCI: Using configuration type 1 for base access
Apr 16 04:56:42.000049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 04:56:42.000055 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 04:56:42.000062 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 04:56:42.000068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 04:56:42.000074 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 04:56:42.000079 kernel: ACPI: Added _OSI(Module Device)
Apr 16 04:56:42.000085 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 04:56:42.000091 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 04:56:42.000096 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 04:56:42.000102 kernel: ACPI: Interpreter enabled
Apr 16 04:56:42.000134 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 04:56:42.000142 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 04:56:42.000148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 04:56:42.000154 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 04:56:42.000160 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 04:56:42.000165 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 04:56:42.000314 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 04:56:42.000407 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 04:56:42.000489 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 04:56:42.000510 kernel: PCI host bridge to bus 0000:00
Apr 16 04:56:42.000571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 04:56:42.000619 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 04:56:42.000666 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 04:56:42.000711 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 04:56:42.000757 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 04:56:42.000803 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 04:56:42.000851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 04:56:42.000940 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 04:56:42.001023 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 16 04:56:42.001077 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 16 04:56:42.001157 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 16 04:56:42.001210 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 16 04:56:42.001264 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 04:56:42.001323 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 04:56:42.001378 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 16 04:56:42.001431 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 16 04:56:42.001663 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 04:56:42.001762 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 16 04:56:42.001819 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 16 04:56:42.001875 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 16 04:56:42.001928 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 04:56:42.002011 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 04:56:42.002068 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 16 04:56:42.002370 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 16 04:56:42.002430 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 04:56:42.002483 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 16 04:56:42.002552 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 04:56:42.002605 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 04:56:42.002661 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 04:56:42.002713 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 16 04:56:42.002764 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 16 04:56:42.002822 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 04:56:42.002876 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 16 04:56:42.002883 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 04:56:42.002889 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 04:56:42.002895 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 04:56:42.002901 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 04:56:42.002906 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 04:56:42.002912 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 04:56:42.002918 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 04:56:42.002923 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 04:56:42.002933 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 04:56:42.002938 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 04:56:42.002944 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 04:56:42.002950 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 04:56:42.002955 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 04:56:42.002961 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 04:56:42.002995 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 04:56:42.003003 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 04:56:42.003009 kernel: iommu: Default domain type: Translated
Apr 16 04:56:42.003019 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 04:56:42.003024 kernel: PCI: Using ACPI for IRQ routing
Apr 16 04:56:42.003030 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 04:56:42.003035 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 04:56:42.003041 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 04:56:42.003378 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 04:56:42.003445 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 04:56:42.003497 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 04:56:42.003513 kernel: vgaarb: loaded
Apr 16 04:56:42.003519 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 04:56:42.003525 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 04:56:42.003531 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 04:56:42.003537 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 04:56:42.003542 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 04:56:42.003548 kernel: pnp: PnP ACPI init
Apr 16 04:56:42.003608 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 04:56:42.003618 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 04:56:42.003624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 04:56:42.003630 kernel: NET: Registered PF_INET protocol family
Apr 16 04:56:42.003635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 04:56:42.003641 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 04:56:42.003647 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 04:56:42.003653 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 04:56:42.003659 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 04:56:42.003665 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 04:56:42.003672 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:56:42.003678 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 04:56:42.003684 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 04:56:42.003690 kernel: NET: Registered PF_XDP protocol family
Apr 16 04:56:42.003741 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 04:56:42.003789 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 04:56:42.004011 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 04:56:42.004066 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 04:56:42.004142 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 04:56:42.004197 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 04:56:42.004204 kernel: PCI: CLS 0 bytes, default 64
Apr 16 04:56:42.004211 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 04:56:42.004217 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 04:56:42.004223 kernel: Initialise system trusted keyrings
Apr 16 04:56:42.004229 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 04:56:42.004235 kernel: Key type asymmetric registered
Apr 16 04:56:42.004240 kernel: Asymmetric key parser 'x509' registered
Apr 16 04:56:42.004248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 04:56:42.004254 kernel: io scheduler mq-deadline registered
Apr 16 04:56:42.004261 kernel: io scheduler kyber registered
Apr 16 04:56:42.004267 kernel: io scheduler bfq registered
Apr 16 04:56:42.004273 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 04:56:42.004279 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 04:56:42.004285 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 04:56:42.004291 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 04:56:42.004297 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 04:56:42.004304 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 04:56:42.004310 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 04:56:42.004316 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 04:56:42.004321 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 04:56:42.004379 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 04:56:42.004387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 04:56:42.004435 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 04:56:42.004482 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T04:56:41 UTC (1776315401)
Apr 16 04:56:42.004532 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 04:56:42.004539 kernel: intel_pstate: CPU model not supported
Apr 16 04:56:42.004545 kernel: NET: Registered PF_INET6 protocol family
Apr 16 04:56:42.004551 kernel: Segment Routing with IPv6
Apr 16 04:56:42.004556 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 04:56:42.004562 kernel: NET: Registered PF_PACKET protocol family
Apr 16 04:56:42.004568 kernel: Key type dns_resolver registered
Apr 16 04:56:42.004573 kernel: IPI shorthand broadcast: enabled
Apr 16 04:56:42.004579 kernel: sched_clock: Marking stable (3307010712, 232070441)->(3635081894, -96000741)
Apr 16 04:56:42.004587 kernel: registered taskstats version 1
Apr 16 04:56:42.004592 kernel: Loading compiled-in X.509 certificates
Apr 16 04:56:42.004598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab'
Apr 16 04:56:42.004604 kernel: Demotion targets for Node 0: null
Apr 16 04:56:42.004610 kernel: Key type .fscrypt registered
Apr 16 04:56:42.004615 kernel: Key type fscrypt-provisioning registered
Apr 16 04:56:42.004621 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 04:56:42.004627 kernel: ima: Allocated hash algorithm: sha1
Apr 16 04:56:42.004632 kernel: ima: No architecture policies found
Apr 16 04:56:42.004639 kernel: clk: Disabling unused clocks
Apr 16 04:56:42.004645 kernel: Warning: unable to open an initial console.
Apr 16 04:56:42.004651 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 16 04:56:42.004657 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 04:56:42.004663 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 04:56:42.004680 kernel: Run /init as init process
Apr 16 04:56:42.004686 kernel: with arguments:
Apr 16 04:56:42.004691 kernel: /init
Apr 16 04:56:42.004697 kernel: with environment:
Apr 16 04:56:42.004703 kernel: HOME=/
Apr 16 04:56:42.004723 kernel: TERM=linux
Apr 16 04:56:42.004738 systemd[1]: Successfully made /usr/ read-only.
Apr 16 04:56:42.004747 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 04:56:42.004761 systemd[1]: Detected virtualization kvm.
Apr 16 04:56:42.004775 systemd[1]: Detected architecture x86-64.
Apr 16 04:56:42.004795 systemd[1]: Running in initrd.
Apr 16 04:56:42.004811 systemd[1]: No hostname configured, using default hostname.
Apr 16 04:56:42.004824 systemd[1]: Hostname set to .
Apr 16 04:56:42.004831 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 04:56:42.004844 systemd[1]: Queued start job for default target initrd.target.
Apr 16 04:56:42.004851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 04:56:42.004857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 04:56:42.004864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 04:56:42.004872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 04:56:42.004878 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 04:56:42.004885 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 04:56:42.004892 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 04:56:42.004898 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 04:56:42.004904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 04:56:42.004911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 04:56:42.004919 systemd[1]: Reached target paths.target - Path Units.
Apr 16 04:56:42.004925 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 04:56:42.004931 systemd[1]: Reached target swap.target - Swaps.
Apr 16 04:56:42.004937 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 04:56:42.004944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 04:56:42.004950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 04:56:42.004956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 04:56:42.004963 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 04:56:42.004985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 04:56:42.004992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 04:56:42.004999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 04:56:42.005006 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 04:56:42.005012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 04:56:42.005019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 04:56:42.005027 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 04:56:42.005034 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 04:56:42.005040 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 04:56:42.005046 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 04:56:42.005053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 04:56:42.005069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:56:42.005076 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 04:56:42.005140 systemd-journald[199]: Collecting audit messages is disabled. Apr 16 04:56:42.005162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 04:56:42.005168 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 04:56:42.005176 systemd-journald[199]: Journal started Apr 16 04:56:42.005191 systemd-journald[199]: Runtime Journal (/run/log/journal/79a5fd8d84ea4c71b3b17e455891667e) is 6M, max 48.2M, 42.2M free. 
Apr 16 04:56:42.005165 systemd-modules-load[202]: Inserted module 'overlay'
Apr 16 04:56:42.009486 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 04:56:42.015591 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 04:56:42.022265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 04:56:42.030350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 04:56:42.038156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 04:56:42.039529 systemd-modules-load[202]: Inserted module 'br_netfilter'
Apr 16 04:56:42.153242 kernel: Bridge firewalling registered
Apr 16 04:56:42.042453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 04:56:42.047286 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 04:56:42.153735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 04:56:42.156617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 04:56:42.157348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:56:42.161552 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 04:56:42.163928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 04:56:42.198662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 04:56:42.203194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 04:56:42.206685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 04:56:42.215059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 04:56:42.219525 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 04:56:42.242555 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 04:56:42.257736 systemd-resolved[236]: Positive Trust Anchors:
Apr 16 04:56:42.258025 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 04:56:42.258091 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 04:56:42.265545 systemd-resolved[236]: Defaulting to hostname 'linux'.
Apr 16 04:56:42.268654 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 04:56:42.280915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 04:56:42.469452 kernel: SCSI subsystem initialized
Apr 16 04:56:42.488763 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 04:56:42.513321 kernel: iscsi: registered transport (tcp)
Apr 16 04:56:42.577826 kernel: iscsi: registered transport (qla4xxx)
Apr 16 04:56:42.578148 kernel: QLogic iSCSI HBA Driver
Apr 16 04:56:42.637319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 04:56:42.655212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 04:56:42.658388 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 04:56:42.719670 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 04:56:42.722626 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 04:56:42.788308 kernel: raid6: avx512x4 gen() 42145 MB/s
Apr 16 04:56:42.805404 kernel: raid6: avx512x2 gen() 45000 MB/s
Apr 16 04:56:42.822289 kernel: raid6: avx512x1 gen() 43542 MB/s
Apr 16 04:56:42.840290 kernel: raid6: avx2x4 gen() 34309 MB/s
Apr 16 04:56:42.857288 kernel: raid6: avx2x2 gen() 33537 MB/s
Apr 16 04:56:42.875384 kernel: raid6: avx2x1 gen() 25196 MB/s
Apr 16 04:56:42.875481 kernel: raid6: using algorithm avx512x2 gen() 45000 MB/s
Apr 16 04:56:42.893488 kernel: raid6: .... xor() 26890 MB/s, rmw enabled
Apr 16 04:56:42.893580 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 04:56:42.928212 kernel: xor: automatically using best checksumming function avx
Apr 16 04:56:43.322416 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 04:56:43.340260 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 04:56:43.347616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 04:56:43.387499 systemd-udevd[452]: Using default interface naming scheme 'v255'.
Apr 16 04:56:43.391104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 04:56:43.397901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 04:56:43.447874 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Apr 16 04:56:43.575485 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 04:56:43.585337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 04:56:43.669812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 04:56:43.678016 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 04:56:43.746296 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 04:56:43.755490 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 04:56:43.774524 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 16 04:56:43.779287 kernel: libata version 3.00 loaded.
Apr 16 04:56:43.788842 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 04:56:43.789092 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 04:56:43.796045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 04:56:43.796177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:56:43.805163 kernel: AES CTR mode by8 optimization enabled
Apr 16 04:56:43.805205 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 16 04:56:43.805329 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 04:56:43.808854 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 16 04:56:43.811749 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 04:56:43.811949 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 04:56:43.813409 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:56:43.823251 kernel: GPT:9289727 != 19775487
Apr 16 04:56:43.823280 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 04:56:43.823292 kernel: GPT:9289727 != 19775487
Apr 16 04:56:43.823303 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 04:56:43.823315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:56:43.825615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 04:56:43.832830 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 04:56:43.840256 kernel: scsi host0: ahci
Apr 16 04:56:43.842436 kernel: scsi host1: ahci
Apr 16 04:56:43.850220 kernel: scsi host2: ahci
Apr 16 04:56:43.856226 kernel: scsi host3: ahci
Apr 16 04:56:43.858061 kernel: scsi host4: ahci
Apr 16 04:56:43.859150 kernel: scsi host5: ahci
Apr 16 04:56:43.867497 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 16 04:56:43.867559 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 16 04:56:43.867572 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 16 04:56:43.869064 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 16 04:56:43.874804 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 16 04:56:43.878371 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 16 04:56:43.976422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 04:56:44.105646 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 04:56:44.111892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 04:56:44.171797 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 04:56:44.179072 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 04:56:44.199448 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 04:56:44.199478 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 04:56:44.205222 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 04:56:44.207331 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 04:56:44.207886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 04:56:44.223833 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 04:56:44.223856 kernel: ata3.00: LPM support broken, forcing max_power
Apr 16 04:56:44.223864 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 04:56:44.223871 kernel: ata3.00: applying bridge limits
Apr 16 04:56:44.223880 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 04:56:44.223887 kernel: ata3.00: LPM support broken, forcing max_power
Apr 16 04:56:44.223894 kernel: ata3.00: configured for UDMA/100
Apr 16 04:56:44.224040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 04:56:44.230469 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 04:56:44.290048 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 04:56:44.290597 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 04:56:44.290609 disk-uuid[642]: Primary Header is updated.
Apr 16 04:56:44.290609 disk-uuid[642]: Secondary Entries is updated.
Apr 16 04:56:44.290609 disk-uuid[642]: Secondary Header is updated.
Apr 16 04:56:44.353541 kernel: hrtimer: interrupt took 2553421 ns
Apr 16 04:56:44.353570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:56:44.360354 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 04:56:44.799746 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 04:56:44.803685 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 04:56:44.808465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 04:56:44.812505 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 04:56:44.818248 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 04:56:44.850703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 04:56:45.431533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 04:56:45.434649 disk-uuid[643]: The operation has completed successfully.
Apr 16 04:56:45.484293 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 04:56:45.484391 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 04:56:45.527369 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 04:56:45.565221 sh[672]: Success
Apr 16 04:56:45.606248 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 04:56:45.606322 kernel: device-mapper: uevent: version 1.0.3
Apr 16 04:56:45.606333 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 04:56:45.621221 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 16 04:56:45.761269 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 04:56:45.774277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 04:56:45.807372 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 04:56:45.831531 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (684)
Apr 16 04:56:45.837524 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2
Apr 16 04:56:45.837592 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:56:45.850354 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 04:56:45.850431 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 04:56:45.852033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 04:56:45.859629 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 04:56:45.867952 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 04:56:45.875446 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 04:56:45.882566 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 04:56:45.932162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715)
Apr 16 04:56:45.935917 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74
Apr 16 04:56:45.936010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:56:45.944055 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 04:56:45.944104 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 04:56:45.956817 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74
Apr 16 04:56:45.958961 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 04:56:45.971936 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 04:56:46.541642 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 04:56:46.550450 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 04:56:46.691906 ignition[760]: Ignition 2.22.0
Apr 16 04:56:46.691921 ignition[760]: Stage: fetch-offline
Apr 16 04:56:46.692698 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:56:46.692750 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:56:46.692893 ignition[760]: parsed url from cmdline: ""
Apr 16 04:56:46.692895 ignition[760]: no config URL provided
Apr 16 04:56:46.692899 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 04:56:46.692904 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Apr 16 04:56:46.692923 ignition[760]: op(1): [started] loading QEMU firmware config module
Apr 16 04:56:46.692926 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 04:56:46.724449 ignition[760]: op(1): [finished] loading QEMU firmware config module
Apr 16 04:56:46.740173 systemd-networkd[858]: lo: Link UP
Apr 16 04:56:46.740183 systemd-networkd[858]: lo: Gained carrier
Apr 16 04:56:46.749337 systemd-networkd[858]: Enumeration completed
Apr 16 04:56:46.750866 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 04:56:46.753083 systemd[1]: Reached target network.target - Network.
Apr 16 04:56:46.753623 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:56:46.753625 systemd-networkd[858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 04:56:46.756633 systemd-networkd[858]: eth0: Link UP
Apr 16 04:56:46.756740 systemd-networkd[858]: eth0: Gained carrier
Apr 16 04:56:46.756753 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 04:56:46.847101 systemd-networkd[858]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 04:56:47.004722 ignition[760]: parsing config with SHA512: 34795866ff4052325f6f6dae5881ed335d17e22cdb691afd816a1091cf4c1a00a7405c68a2e44061ef3e123e7e6359cb6c3aa992a585330927001c20205588fc
Apr 16 04:56:47.021951 unknown[760]: fetched base config from "system"
Apr 16 04:56:47.024158 unknown[760]: fetched user config from "qemu"
Apr 16 04:56:47.025842 ignition[760]: fetch-offline: fetch-offline passed
Apr 16 04:56:47.036791 ignition[760]: Ignition finished successfully
Apr 16 04:56:47.057661 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 04:56:47.063687 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 04:56:47.068304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 04:56:47.261578 ignition[866]: Ignition 2.22.0
Apr 16 04:56:47.269503 ignition[866]: Stage: kargs
Apr 16 04:56:47.273489 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:56:47.273528 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:56:47.294264 ignition[866]: kargs: kargs passed
Apr 16 04:56:47.295282 ignition[866]: Ignition finished successfully
Apr 16 04:56:47.298977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 04:56:47.306864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 04:56:47.499033 ignition[874]: Ignition 2.22.0
Apr 16 04:56:47.499051 ignition[874]: Stage: disks
Apr 16 04:56:47.499315 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Apr 16 04:56:47.499323 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:56:47.507597 ignition[874]: disks: disks passed
Apr 16 04:56:47.507671 ignition[874]: Ignition finished successfully
Apr 16 04:56:47.510554 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 04:56:47.512733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 04:56:47.516668 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 04:56:47.519169 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 04:56:47.527968 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 04:56:47.531840 systemd[1]: Reached target basic.target - Basic System.
Apr 16 04:56:47.540484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 04:56:47.859913 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 16 04:56:47.875753 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 04:56:47.887865 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 04:56:48.200183 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none.
Apr 16 04:56:48.200868 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 04:56:48.203493 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 04:56:48.206411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:56:48.209503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 04:56:48.211239 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 04:56:48.211272 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 04:56:48.211290 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 04:56:48.221168 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 04:56:48.226056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 04:56:48.261474 systemd-networkd[858]: eth0: Gained IPv6LL
Apr 16 04:56:48.360557 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892)
Apr 16 04:56:48.379335 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74
Apr 16 04:56:48.379484 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:56:48.388770 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 04:56:48.388865 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 04:56:48.435240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:56:48.462973 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 04:56:48.470823 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory
Apr 16 04:56:48.475381 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 04:56:48.487492 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 04:56:48.812665 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 04:56:48.818649 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 04:56:48.826650 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 04:56:48.878430 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74
Apr 16 04:56:48.881685 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 04:56:48.909712 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 04:56:49.074900 ignition[1006]: INFO : Ignition 2.22.0
Apr 16 04:56:49.074900 ignition[1006]: INFO : Stage: mount
Apr 16 04:56:49.082462 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:56:49.082462 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:56:49.089951 ignition[1006]: INFO : mount: mount passed
Apr 16 04:56:49.131237 ignition[1006]: INFO : Ignition finished successfully
Apr 16 04:56:49.137408 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 04:56:49.143398 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 04:56:49.214157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 04:56:49.245334 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018)
Apr 16 04:56:49.249767 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74
Apr 16 04:56:49.249800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 04:56:49.258777 kernel: BTRFS info (device vda6): turning on async discard
Apr 16 04:56:49.258830 kernel: BTRFS info (device vda6): enabling free space tree
Apr 16 04:56:49.262876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 04:56:49.366727 ignition[1035]: INFO : Ignition 2.22.0
Apr 16 04:56:49.375527 ignition[1035]: INFO : Stage: files
Apr 16 04:56:49.378837 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 04:56:49.378837 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 04:56:49.384973 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 04:56:49.388502 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 04:56:49.388502 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 04:56:49.396688 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 04:56:49.396688 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 04:56:49.407034 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 04:56:49.402443 unknown[1035]: wrote ssh authorized keys file for user: core
Apr 16 04:56:49.417199 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:56:49.422957 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 04:56:49.527716 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 04:56:49.633382 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 04:56:49.633382 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 04:56:49.641338 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 16 04:56:49.914254 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 16 04:56:51.034760 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 16 04:56:51.034760 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:56:51.044313 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:56:51.071849 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 16 04:56:51.279999 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 16 04:56:53.653779 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 04:56:53.660657 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 16 04:56:53.667052 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 16 04:56:53.670540 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:56:53.730166 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:56:53.734287 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 16 04:56:53.738620 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 16 04:56:53.738620 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 04:56:53.746492 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 04:56:53.746492 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:56:53.753503 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 04:56:53.753503 ignition[1035]: INFO : files: files passed
Apr 16 04:56:53.753503 ignition[1035]: INFO : Ignition finished successfully
Apr 16 04:56:53.759427 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 04:56:53.770676 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 04:56:53.775853 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 04:56:53.790979 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 04:56:53.792897 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 04:56:53.817715 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 04:56:53.821133 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:56:53.821133 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:56:53.827196 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 04:56:53.831387 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 04:56:53.836143 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 04:56:53.838864 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 04:56:53.936076 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 04:56:53.936195 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 04:56:53.942918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 04:56:53.945626 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 04:56:53.946334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 04:56:53.950683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 04:56:54.090872 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 04:56:54.108015 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 04:56:54.238350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:56:54.242843 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 04:56:54.248051 systemd[1]: Stopped target timers.target - Timer Units. 
Apr 16 04:56:54.252578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 04:56:54.255688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 04:56:54.260652 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 04:56:54.264143 systemd[1]: Stopped target basic.target - Basic System. Apr 16 04:56:54.267859 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 04:56:54.272369 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 04:56:54.276250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 04:56:54.280501 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 16 04:56:54.284218 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 04:56:54.288361 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 04:56:54.294795 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 04:56:54.300466 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 04:56:54.305574 systemd[1]: Stopped target swap.target - Swaps. Apr 16 04:56:54.308623 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 04:56:54.310672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 04:56:54.314440 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 04:56:54.318062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 04:56:54.321898 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 04:56:54.323592 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 04:56:54.328147 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 04:56:54.329884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 16 04:56:54.334222 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 04:56:54.335740 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 04:56:54.340434 systemd[1]: Stopped target paths.target - Path Units. Apr 16 04:56:54.343559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 04:56:54.346887 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 04:56:54.353369 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 04:56:54.354783 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 04:56:54.357298 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 04:56:54.357381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 04:56:54.358586 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 04:56:54.358651 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 04:56:54.363656 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 04:56:54.363771 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 04:56:54.364411 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 04:56:54.364508 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 04:56:54.372418 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 04:56:54.378369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 04:56:54.380555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 04:56:54.381235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 04:56:54.385541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 04:56:54.385677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 16 04:56:54.476681 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 04:56:54.478423 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 04:56:54.502605 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 04:56:54.507851 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 04:56:54.507959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 04:56:54.515088 ignition[1090]: INFO : Ignition 2.22.0 Apr 16 04:56:54.515088 ignition[1090]: INFO : Stage: umount Apr 16 04:56:54.518104 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 04:56:54.518104 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 04:56:54.518104 ignition[1090]: INFO : umount: umount passed Apr 16 04:56:54.518104 ignition[1090]: INFO : Ignition finished successfully Apr 16 04:56:54.525162 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 04:56:54.525312 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 04:56:54.529409 systemd[1]: Stopped target network.target - Network. Apr 16 04:56:54.532495 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 04:56:54.532608 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 04:56:54.538624 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 04:56:54.539024 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 04:56:54.540762 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 04:56:54.541432 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 04:56:54.544989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 04:56:54.545090 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 04:56:54.548992 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Apr 16 04:56:54.549273 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 04:56:54.550000 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 04:56:54.558845 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 04:56:54.579005 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 04:56:54.579825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 04:56:54.593774 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 16 04:56:54.594723 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 04:56:54.594840 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 04:56:54.601916 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 16 04:56:54.604244 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 16 04:56:54.604839 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 04:56:54.604888 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 04:56:54.612986 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 04:56:54.613087 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 04:56:54.613182 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 04:56:54.619370 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 04:56:54.619422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:56:54.625590 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 04:56:54.625636 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 04:56:54.632270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 16 04:56:54.632322 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 04:56:54.639597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 04:56:54.642200 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 04:56:54.642270 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 16 04:56:54.656702 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 04:56:54.659733 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 04:56:54.662211 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 04:56:54.662324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 04:56:54.669581 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 04:56:54.669638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 04:56:54.671864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 04:56:54.671976 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 04:56:54.676643 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 04:56:54.676719 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 04:56:54.684749 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 04:56:54.684848 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 04:56:54.690934 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 04:56:54.691802 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 04:56:54.699903 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 04:56:54.705156 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Apr 16 04:56:54.705324 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 04:56:54.714146 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 04:56:54.714260 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 04:56:54.720853 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 04:56:54.720980 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 04:56:54.728134 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 04:56:54.728213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 04:56:54.731700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 04:56:54.731792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:56:54.744015 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 16 04:56:54.744463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Apr 16 04:56:54.744487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 16 04:56:54.744522 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 04:56:54.745245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 04:56:54.745349 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 04:56:54.754453 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 04:56:54.761764 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 04:56:54.801167 systemd[1]: Switching root. Apr 16 04:56:54.831447 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). 
Apr 16 04:56:54.831525 systemd-journald[199]: Journal stopped Apr 16 04:56:57.790769 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 04:56:57.790878 kernel: SELinux: policy capability open_perms=1 Apr 16 04:56:57.790894 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 04:56:57.790906 kernel: SELinux: policy capability always_check_network=0 Apr 16 04:56:57.790922 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 04:56:57.790939 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 04:56:57.790952 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 04:56:57.790968 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 04:56:57.790982 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 04:56:57.790998 kernel: audit: type=1403 audit(1776315415.004:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 04:56:57.791013 systemd[1]: Successfully loaded SELinux policy in 89.840ms. Apr 16 04:56:57.791033 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 88.411ms. Apr 16 04:56:57.791072 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 04:56:57.791098 systemd[1]: Detected virtualization kvm. Apr 16 04:56:57.791139 systemd[1]: Detected architecture x86-64. Apr 16 04:56:57.791153 systemd[1]: Detected first boot. Apr 16 04:56:57.791166 systemd[1]: Initializing machine ID from VM UUID. Apr 16 04:56:57.791179 zram_generator::config[1135]: No configuration found. 
Apr 16 04:56:57.791196 kernel: Guest personality initialized and is inactive Apr 16 04:56:57.791209 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 16 04:56:57.791233 kernel: Initialized host personality Apr 16 04:56:57.791249 kernel: NET: Registered PF_VSOCK protocol family Apr 16 04:56:57.791273 systemd[1]: Populated /etc with preset unit settings. Apr 16 04:56:57.791289 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 04:56:57.791302 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 04:56:57.791315 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 04:56:57.791332 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 04:56:57.791341 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 04:56:57.791349 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 04:56:57.791356 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 04:56:57.791364 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 04:56:57.791372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 04:56:57.791380 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 04:56:57.791388 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 04:56:57.791397 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 04:56:57.791405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 04:56:57.791417 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 04:56:57.791430 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Apr 16 04:56:57.791443 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 04:56:57.791459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 04:56:57.791473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 04:56:57.791486 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 04:56:57.791501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 04:56:57.791513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 04:56:57.791526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 04:56:57.791542 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 04:56:57.791555 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 04:56:57.791567 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 04:56:57.791575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 04:56:57.791586 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 04:56:57.791594 systemd[1]: Reached target slices.target - Slice Units. Apr 16 04:56:57.791603 systemd[1]: Reached target swap.target - Swaps. Apr 16 04:56:57.791612 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 04:56:57.791625 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 04:56:57.791639 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 04:56:57.791651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 04:56:57.791665 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 04:56:57.791678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 16 04:56:57.791691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 04:56:57.791704 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 04:56:57.791718 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 04:56:57.791731 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 04:56:57.791744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:56:57.791757 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 04:56:57.791772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 04:56:57.791787 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 04:56:57.791801 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 04:56:57.791814 systemd[1]: Reached target machines.target - Containers. Apr 16 04:56:57.791831 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 04:56:57.791847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:56:57.791860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 04:56:57.791875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 04:56:57.791890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:56:57.791903 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 04:56:57.791916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:56:57.791928 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 16 04:56:57.791942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:56:57.791958 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 04:56:57.791972 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 04:56:57.791985 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 04:56:57.792001 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 04:56:57.792016 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 04:56:57.792029 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:56:57.792066 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 04:56:57.792079 kernel: fuse: init (API version 7.41) Apr 16 04:56:57.792096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 04:56:57.792152 kernel: loop: module loaded Apr 16 04:56:57.792165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 04:56:57.792177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 04:56:57.792285 systemd-journald[1206]: Collecting audit messages is disabled. Apr 16 04:56:57.792319 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 04:56:57.792334 kernel: ACPI: bus type drm_connector registered Apr 16 04:56:57.792349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 16 04:56:57.792366 systemd-journald[1206]: Journal started Apr 16 04:56:57.792395 systemd-journald[1206]: Runtime Journal (/run/log/journal/79a5fd8d84ea4c71b3b17e455891667e) is 6M, max 48.2M, 42.2M free. Apr 16 04:56:57.036002 systemd[1]: Queued start job for default target multi-user.target. Apr 16 04:56:57.057293 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 04:56:57.059027 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 04:56:57.799996 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 04:56:57.801896 systemd[1]: Stopped verity-setup.service. Apr 16 04:56:57.801914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:56:57.815183 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 04:56:57.817545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 04:56:57.820321 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 04:56:57.823077 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 04:56:57.825104 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 04:56:57.828359 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 04:56:57.833168 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 04:56:57.836807 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 04:56:57.838852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 04:56:57.842600 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 04:56:57.842847 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 04:56:57.848400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 16 04:56:57.848653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:56:57.851386 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 04:56:57.851616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 04:56:57.862274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:56:57.864383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:56:57.867281 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 04:56:57.867487 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 04:56:57.874557 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:56:57.874776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 04:56:57.881665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 04:56:57.891417 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 04:56:57.966159 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 04:56:57.972220 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 04:56:58.008940 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 04:56:58.015467 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 04:56:58.019201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 04:56:58.021207 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 04:56:58.021243 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 04:56:58.029262 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Apr 16 04:56:58.036900 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 04:56:58.039208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:56:58.053639 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 04:56:58.070554 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 04:56:58.077960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 04:56:58.088829 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 04:56:58.094636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 04:56:58.102159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:56:58.104891 systemd-journald[1206]: Time spent on flushing to /var/log/journal/79a5fd8d84ea4c71b3b17e455891667e is 55.770ms for 988 entries. Apr 16 04:56:58.104891 systemd-journald[1206]: System Journal (/var/log/journal/79a5fd8d84ea4c71b3b17e455891667e) is 8M, max 195.6M, 187.6M free. Apr 16 04:56:58.497686 systemd-journald[1206]: Received client request to flush runtime journal. Apr 16 04:56:58.497750 kernel: loop0: detected capacity change from 0 to 219192 Apr 16 04:56:58.497777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 04:56:58.108102 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 04:56:58.114350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 04:56:58.141168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 16 04:56:58.399668 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 04:56:58.402828 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 04:56:58.410296 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 04:56:58.431807 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 04:56:58.450522 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 04:56:58.486548 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:56:58.501313 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Apr 16 04:56:58.501321 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Apr 16 04:56:58.506478 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 04:56:58.510363 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 04:56:58.528223 kernel: loop1: detected capacity change from 0 to 128560 Apr 16 04:56:58.528290 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 04:56:58.531778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 04:56:58.534361 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 04:56:58.655759 kernel: loop2: detected capacity change from 0 to 110984 Apr 16 04:56:58.727089 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 04:56:58.735363 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 04:56:58.751378 kernel: loop3: detected capacity change from 0 to 219192 Apr 16 04:56:58.809536 kernel: loop4: detected capacity change from 0 to 128560 Apr 16 04:56:58.862096 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. 
Apr 16 04:56:58.864510 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Apr 16 04:56:58.945091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 04:56:58.961537 kernel: loop5: detected capacity change from 0 to 110984 Apr 16 04:56:59.044264 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 04:56:59.053618 (sd-merge)[1280]: Merged extensions into '/usr'. Apr 16 04:56:59.090844 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 04:56:59.090878 systemd[1]: Reloading... Apr 16 04:56:59.683198 zram_generator::config[1305]: No configuration found. Apr 16 04:56:59.853677 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 04:57:01.318856 systemd[1]: Reloading finished in 2224 ms. Apr 16 04:57:01.365559 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 04:57:01.375033 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 04:57:01.417513 systemd[1]: Starting ensure-sysext.service... Apr 16 04:57:01.429883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 04:57:01.490582 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Apr 16 04:57:01.491330 systemd[1]: Reloading... Apr 16 04:57:01.543879 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 04:57:01.543926 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 04:57:01.547620 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 16 04:57:01.547899 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 04:57:01.556613 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 04:57:01.558771 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Apr 16 04:57:01.558833 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Apr 16 04:57:01.564843 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:57:01.565637 systemd-tmpfiles[1347]: Skipping /boot Apr 16 04:57:01.594816 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 04:57:01.594840 systemd-tmpfiles[1347]: Skipping /boot Apr 16 04:57:01.654450 zram_generator::config[1373]: No configuration found. Apr 16 04:57:02.956151 systemd[1]: Reloading finished in 1435 ms. Apr 16 04:57:03.023921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 04:57:03.065614 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 04:57:03.072042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 04:57:03.096526 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 04:57:03.113234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 04:57:03.120747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 04:57:03.134500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:03.135131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:57:03.144174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 16 04:57:03.178320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:57:03.182805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:57:03.202919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:57:03.204905 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:57:03.355184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:03.375382 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 04:57:03.384346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:57:03.384615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:57:03.399327 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 04:57:03.409787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:57:03.413549 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:57:03.417533 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:57:03.418603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 04:57:03.434302 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 04:57:03.545605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:03.560154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 16 04:57:03.601591 augenrules[1445]: No rules Apr 16 04:57:03.600684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:57:03.620153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:57:03.649791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:57:03.654742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 04:57:03.677041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:57:03.683469 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 04:57:03.686557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:03.695721 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 04:57:03.699291 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:57:03.700781 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 04:57:03.713683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:57:03.717368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:57:03.727340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:57:03.727604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:57:03.871305 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:57:03.871528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 16 04:57:03.900052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 04:57:03.977611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:04.062914 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 04:57:04.067233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 04:57:04.105025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 04:57:04.113684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 04:57:04.121657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 04:57:04.125729 systemd-resolved[1415]: Positive Trust Anchors: Apr 16 04:57:04.125756 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 04:57:04.125792 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 04:57:04.135261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 04:57:04.140280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 16 04:57:04.140475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 04:57:04.140597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 04:57:04.151813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 04:57:04.181950 systemd-resolved[1415]: Defaulting to hostname 'linux'. Apr 16 04:57:04.185462 systemd[1]: Finished ensure-sysext.service. Apr 16 04:57:04.190049 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 04:57:04.205915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 04:57:04.213481 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 04:57:04.218422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 04:57:04.218596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 04:57:04.245028 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 04:57:04.245849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 04:57:04.266449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 04:57:04.266654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 04:57:04.271062 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 04:57:04.332957 augenrules[1462]: /sbin/augenrules: No change Apr 16 04:57:04.281905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 16 04:57:04.344656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 04:57:04.344746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 04:57:04.383226 augenrules[1490]: No rules Apr 16 04:57:04.398513 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:57:04.411766 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 04:57:04.701452 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 04:57:04.704145 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 04:57:04.743804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 04:57:04.757561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 04:57:04.767092 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 04:57:04.899956 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 04:57:04.945491 systemd-udevd[1498]: Using default interface naming scheme 'v255'. Apr 16 04:57:05.077838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 04:57:05.082517 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 04:57:05.151551 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 04:57:05.157237 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 04:57:05.163779 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 04:57:05.173436 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 16 04:57:05.179026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 04:57:05.182851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 04:57:05.190710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 04:57:05.193010 systemd[1]: Reached target paths.target - Path Units. Apr 16 04:57:05.197770 systemd[1]: Reached target timers.target - Timer Units. Apr 16 04:57:05.208845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 04:57:05.212777 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 04:57:05.234802 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 04:57:05.239373 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 04:57:05.244883 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 04:57:05.264870 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 04:57:05.271500 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 04:57:05.281945 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 04:57:05.290157 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 04:57:05.308905 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 04:57:05.311895 systemd[1]: Reached target basic.target - Basic System. Apr 16 04:57:05.315339 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:57:05.315374 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 04:57:05.316726 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 16 04:57:05.327791 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 04:57:05.336621 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 04:57:05.342671 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 04:57:05.347425 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 04:57:05.349436 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 04:57:05.374492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 04:57:05.383722 jq[1534]: false Apr 16 04:57:05.383302 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 04:57:05.468745 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 04:57:05.485208 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 04:57:05.489592 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing passwd entry cache Apr 16 04:57:05.489216 oslogin_cache_refresh[1536]: Refreshing passwd entry cache Apr 16 04:57:05.507949 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 04:57:05.510543 oslogin_cache_refresh[1536]: Failure getting users, quitting Apr 16 04:57:05.525489 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting users, quitting Apr 16 04:57:05.525489 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 16 04:57:05.525489 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Refreshing group entry cache Apr 16 04:57:05.525489 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Failure getting groups, quitting Apr 16 04:57:05.525489 google_oslogin_nss_cache[1536]: oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 04:57:05.514955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 04:57:05.510606 oslogin_cache_refresh[1536]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 04:57:05.516020 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 04:57:05.510654 oslogin_cache_refresh[1536]: Refreshing group entry cache Apr 16 04:57:05.519189 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 04:57:05.572935 extend-filesystems[1535]: Found /dev/vda6 Apr 16 04:57:05.511761 oslogin_cache_refresh[1536]: Failure getting groups, quitting Apr 16 04:57:05.533067 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 04:57:05.589141 extend-filesystems[1535]: Found /dev/vda9 Apr 16 04:57:05.511770 oslogin_cache_refresh[1536]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 04:57:05.577990 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 04:57:05.589675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 04:57:05.589951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 04:57:05.620013 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 04:57:05.625783 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Apr 16 04:57:05.690410 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 04:57:05.714018 systemd-networkd[1530]: lo: Link UP Apr 16 04:57:05.714026 systemd-networkd[1530]: lo: Gained carrier Apr 16 04:57:05.717475 systemd-networkd[1530]: Enumeration completed Apr 16 04:57:05.719011 extend-filesystems[1535]: Checking size of /dev/vda9 Apr 16 04:57:05.721023 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 04:57:05.728128 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 04:57:05.735051 jq[1546]: true Apr 16 04:57:05.799251 systemd[1]: Reached target network.target - Network. Apr 16 04:57:05.816148 jq[1558]: true Apr 16 04:57:05.813590 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 04:57:05.821058 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 16 04:57:05.841862 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 04:57:05.846264 extend-filesystems[1535]: Resized partition /dev/vda9 Apr 16 04:57:05.867013 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 04:57:05.952824 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 04:57:05.953469 update_engine[1545]: I20260416 04:57:05.860847 1545 main.cc:92] Flatcar Update Engine starting Apr 16 04:57:05.860517 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 04:57:05.860805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 04:57:05.982974 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 16 04:57:05.996553 tar[1556]: linux-amd64/LICENSE Apr 16 04:57:05.996553 tar[1556]: linux-amd64/helm Apr 16 04:57:06.012489 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 04:57:06.034673 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 04:57:06.034673 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 04:57:06.034673 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 04:57:06.048797 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Apr 16 04:57:06.034896 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 04:57:06.039374 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 04:57:06.062378 systemd-logind[1544]: New seat seat0. Apr 16 04:57:06.065400 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 04:57:06.096984 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 04:57:06.103174 dbus-daemon[1531]: [system] SELinux support is enabled Apr 16 04:57:06.103962 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 04:57:06.108387 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 04:57:06.108710 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 04:57:06.108735 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 04:57:06.111320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 16 04:57:06.111350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 04:57:06.128009 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 04:57:06.130176 update_engine[1545]: I20260416 04:57:06.130134 1545 update_check_scheduler.cc:74] Next update check in 9m44s Apr 16 04:57:06.618716 systemd[1]: Started update-engine.service - Update Engine. Apr 16 04:57:06.646848 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 04:57:06.822047 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Apr 16 04:57:06.829715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 04:57:06.847248 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 04:57:07.120270 systemd-networkd[1530]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:57:07.120283 systemd-networkd[1530]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 04:57:07.128983 systemd-networkd[1530]: eth0: Link UP Apr 16 04:57:07.129222 systemd-networkd[1530]: eth0: Gained carrier Apr 16 04:57:07.129245 systemd-networkd[1530]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 04:57:07.246219 systemd-networkd[1530]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 04:57:07.297827 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Apr 16 04:57:07.309452 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 04:57:07.309586 systemd-timesyncd[1475]: Initial clock synchronization to Thu 2026-04-16 04:57:07.044886 UTC. Apr 16 04:57:07.438517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 16 04:57:07.505968 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 04:57:07.641629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 04:57:07.906355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 04:57:07.963763 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 04:57:07.990776 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 04:57:08.014312 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 04:57:08.353325 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 16 04:57:08.070286 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 04:57:08.318893 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 04:57:08.372867 kernel: ACPI: button: Power Button [PWRF] Apr 16 04:57:08.334192 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:56284.service - OpenSSH per-connection server daemon (10.0.0.1:56284). Apr 16 04:57:08.569497 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 04:57:08.628779 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 04:57:08.567722 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 04:57:08.567875 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 04:57:08.624602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 04:57:09.014233 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 04:57:09.058858 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 04:57:09.065357 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 04:57:09.077336 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 16 04:57:09.275922 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 56284 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:09.276638 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:09.392568 systemd-networkd[1530]: eth0: Gained IPv6LL Apr 16 04:57:09.428928 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 04:57:09.432423 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 04:57:09.569810 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 04:57:09.613381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:57:09.672987 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 04:57:10.004865 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 04:57:10.065277 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 04:57:10.079710 containerd[1586]: time="2026-04-16T04:57:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 04:57:10.079710 containerd[1586]: time="2026-04-16T04:57:10.075059435Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 04:57:10.068754 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 04:57:10.070139 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 04:57:10.088939 systemd-logind[1544]: New session 1 of user core. Apr 16 04:57:10.106901 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 04:57:10.179239 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 16 04:57:10.393264 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 04:57:10.450386 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 04:57:10.635479 containerd[1586]: time="2026-04-16T04:57:10.612959915Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=2.914367ms Apr 16 04:57:10.635479 containerd[1586]: time="2026-04-16T04:57:10.633965760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 04:57:10.668297 containerd[1586]: time="2026-04-16T04:57:10.638638232Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 04:57:10.668297 containerd[1586]: time="2026-04-16T04:57:10.652376354Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 04:57:10.668297 containerd[1586]: time="2026-04-16T04:57:10.664186138Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 04:57:10.648960 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 04:57:10.679884 containerd[1586]: time="2026-04-16T04:57:10.675135585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 04:57:10.679884 containerd[1586]: time="2026-04-16T04:57:10.675566828Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 04:57:10.679884 containerd[1586]: time="2026-04-16T04:57:10.675586166Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 04:57:10.687543 containerd[1586]: time="2026-04-16T04:57:10.685644918Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 04:57:10.687543 containerd[1586]: time="2026-04-16T04:57:10.685791770Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 04:57:10.687543 containerd[1586]: time="2026-04-16T04:57:10.685862814Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 04:57:10.687543 containerd[1586]: time="2026-04-16T04:57:10.685870157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 04:57:10.866632 containerd[1586]: time="2026-04-16T04:57:10.858267201Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 04:57:10.905267 containerd[1586]: time="2026-04-16T04:57:10.904880637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 04:57:10.905824 containerd[1586]: time="2026-04-16T04:57:10.905799188Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 04:57:10.905924 containerd[1586]: time="2026-04-16T04:57:10.905911620Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 04:57:10.909128 systemd-logind[1544]: New session c1 of user core. 
Apr 16 04:57:10.910888 containerd[1586]: time="2026-04-16T04:57:10.909319400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 04:57:11.348570 containerd[1586]: time="2026-04-16T04:57:11.336792736Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 04:57:11.388236 containerd[1586]: time="2026-04-16T04:57:11.357732920Z" level=info msg="metadata content store policy set" policy=shared Apr 16 04:57:11.619979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 04:57:11.806003 containerd[1586]: time="2026-04-16T04:57:11.800947675Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 04:57:11.806003 containerd[1586]: time="2026-04-16T04:57:11.801480145Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809084314Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809485643Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809515300Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809538412Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809615237Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809625611Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809643172Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809652350Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809660326Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809682098Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.809980263Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.810003457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.810015757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 04:57:12.094635 containerd[1586]: time="2026-04-16T04:57:11.810052697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 04:57:12.095082 containerd[1586]: time="2026-04-16T04:57:11.810060731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.093960661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.110613098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.110820171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.110834247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.110843704Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.110857055Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.111062367Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 04:57:12.110937 containerd[1586]: time="2026-04-16T04:57:12.111074524Z" level=info msg="Start snapshots syncer" Apr 16 04:57:12.117356 containerd[1586]: time="2026-04-16T04:57:12.117136015Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 04:57:12.117988 containerd[1586]: time="2026-04-16T04:57:12.117894448Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 04:57:12.122693 containerd[1586]: time="2026-04-16T04:57:12.118028357Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 04:57:12.229929 containerd[1586]: time="2026-04-16T04:57:12.175452573Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 04:57:12.259756 containerd[1586]: time="2026-04-16T04:57:12.176145029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 04:57:12.357693 containerd[1586]: time="2026-04-16T04:57:12.273022256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 04:57:12.371577 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 04:57:12.666141 containerd[1586]: time="2026-04-16T04:57:12.456084231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 04:57:12.665597 systemd[1681]: Queued start job for default target default.target. Apr 16 04:57:12.686836 containerd[1586]: time="2026-04-16T04:57:12.675985045Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 04:57:12.692532 containerd[1586]: time="2026-04-16T04:57:12.688011784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 04:57:12.692532 containerd[1586]: time="2026-04-16T04:57:12.688225317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 04:57:12.692532 containerd[1586]: time="2026-04-16T04:57:12.688246103Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 04:57:12.766041 systemd[1681]: Created slice app.slice - User Application Slice. Apr 16 04:57:12.772295 systemd[1681]: Reached target paths.target - Paths. Apr 16 04:57:12.965009 containerd[1586]: time="2026-04-16T04:57:12.745432371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 04:57:12.772373 systemd[1681]: Reached target timers.target - Timers. 
Apr 16 04:57:12.844907 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 04:57:13.152442 containerd[1586]: time="2026-04-16T04:57:13.150640919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 04:57:13.152442 containerd[1586]: time="2026-04-16T04:57:13.151260875Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181360755Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181521791Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181534075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181544937Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181554340Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181567492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181650574Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181880741Z" level=info msg="runtime interface created" Apr 16 04:57:13.183760 
containerd[1586]: time="2026-04-16T04:57:13.181892169Z" level=info msg="created NRI interface" Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.181928399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 04:57:13.183760 containerd[1586]: time="2026-04-16T04:57:13.182044875Z" level=info msg="Connect containerd service" Apr 16 04:57:13.309192 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 04:57:13.371877 systemd[1681]: Reached target sockets.target - Sockets. Apr 16 04:57:13.372199 systemd[1681]: Reached target basic.target - Basic System. Apr 16 04:57:13.372278 systemd[1681]: Reached target default.target - Main User Target. Apr 16 04:57:13.372310 systemd[1681]: Startup finished in 2.098s. Apr 16 04:57:13.379500 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 04:57:13.437052 containerd[1586]: time="2026-04-16T04:57:13.375088558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 04:57:13.730855 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 04:57:13.806794 systemd-logind[1544]: Watching system buttons on /dev/input/event2 (Power Button) Apr 16 04:57:13.845386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 04:57:14.524658 containerd[1586]: time="2026-04-16T04:57:14.322176756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 04:57:15.303130 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:56286.service - OpenSSH per-connection server daemon (10.0.0.1:56286). Apr 16 04:57:15.606784 tar[1556]: linux-amd64/README.md Apr 16 04:57:15.892780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 16 04:57:16.290437 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 56286 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:16.311017 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:16.575902 containerd[1586]: time="2026-04-16T04:57:16.541791583Z" level=info msg="Start subscribing containerd event" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.582617797Z" level=info msg="Start recovering state" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583141892Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583227698Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583273454Z" level=info msg="Start event monitor" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583378095Z" level=info msg="Start cni network conf syncer for default" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583406974Z" level=info msg="Start streaming server" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583427795Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583455724Z" level=info msg="runtime interface starting up..." Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583471689Z" level=info msg="starting plugins..." Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583516755Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 04:57:16.584044 containerd[1586]: time="2026-04-16T04:57:16.583651095Z" level=info msg="containerd successfully booted in 6.579153s" Apr 16 04:57:16.713498 systemd-logind[1544]: New session 2 of user core. 
Apr 16 04:57:16.848574 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 04:57:17.225398 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 04:57:17.608563 sshd[1723]: Connection closed by 10.0.0.1 port 56286 Apr 16 04:57:17.613535 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:17.755640 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:56286.service: Deactivated successfully. Apr 16 04:57:17.757921 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 04:57:17.819574 systemd-logind[1544]: Session 2 logged out. Waiting for processes to exit. Apr 16 04:57:17.945904 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:60678.service - OpenSSH per-connection server daemon (10.0.0.1:60678). Apr 16 04:57:18.014760 systemd-logind[1544]: Removed session 2. Apr 16 04:57:19.177544 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 60678 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:19.179577 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:19.260957 systemd-logind[1544]: New session 3 of user core. Apr 16 04:57:19.416084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:57:19.463837 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 04:57:19.542696 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 04:57:19.549040 systemd[1]: Startup finished in 3.403s (kernel) + 13.306s (initrd) + 24.623s (userspace) = 41.334s. 
Apr 16 04:57:19.558274 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:57:19.827899 sshd[1738]: Connection closed by 10.0.0.1 port 60678 Apr 16 04:57:19.834930 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:20.011165 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:60678.service: Deactivated successfully. Apr 16 04:57:20.163777 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 04:57:20.233584 systemd-logind[1544]: Session 3 logged out. Waiting for processes to exit. Apr 16 04:57:20.412065 systemd-logind[1544]: Removed session 3. Apr 16 04:57:29.882914 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:49956.service - OpenSSH per-connection server daemon (10.0.0.1:49956). Apr 16 04:57:30.201259 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 49956 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:30.295699 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:30.512476 systemd-logind[1544]: New session 4 of user core. Apr 16 04:57:30.573396 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 16 04:57:30.671890 sshd[1754]: Connection closed by 10.0.0.1 port 49956 Apr 16 04:57:30.674431 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:31.023091 kubelet[1736]: E0416 04:57:31.022387 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:57:31.045237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:57:31.045439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:57:31.045866 systemd[1]: kubelet.service: Consumed 14.074s CPU time, 258.6M memory peak. Apr 16 04:57:31.046674 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:49956.service: Deactivated successfully. Apr 16 04:57:31.081464 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 04:57:31.132245 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Apr 16 04:57:31.290810 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Apr 16 04:57:31.346708 systemd-logind[1544]: Removed session 4. Apr 16 04:57:32.258353 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:32.260575 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:32.616451 systemd-logind[1544]: New session 5 of user core. Apr 16 04:57:32.942984 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 04:57:33.502837 sshd[1764]: Connection closed by 10.0.0.1 port 49962 Apr 16 04:57:33.507813 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:33.754253 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:49962.service: Deactivated successfully. Apr 16 04:57:34.211031 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 04:57:34.421858 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Apr 16 04:57:34.566782 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:49976.service - OpenSSH per-connection server daemon (10.0.0.1:49976). Apr 16 04:57:34.582643 systemd-logind[1544]: Removed session 5. Apr 16 04:57:35.407693 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 49976 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:35.423315 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:35.637086 systemd-logind[1544]: New session 6 of user core. Apr 16 04:57:35.693798 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 04:57:35.789496 sshd[1773]: Connection closed by 10.0.0.1 port 49976 Apr 16 04:57:35.790196 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:35.810366 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:41090.service - OpenSSH per-connection server daemon (10.0.0.1:41090). Apr 16 04:57:35.810752 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:49976.service: Deactivated successfully. Apr 16 04:57:35.828213 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 04:57:35.838474 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Apr 16 04:57:35.840255 systemd-logind[1544]: Removed session 6. 
Apr 16 04:57:35.947924 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 41090 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:35.953005 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:35.979407 systemd-logind[1544]: New session 7 of user core. Apr 16 04:57:35.993222 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 04:57:36.248208 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 04:57:36.251316 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:57:36.302560 sudo[1783]: pam_unix(sudo:session): session closed for user root Apr 16 04:57:36.321024 sshd[1782]: Connection closed by 10.0.0.1 port 41090 Apr 16 04:57:36.325071 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:36.450105 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:41090.service: Deactivated successfully. Apr 16 04:57:36.457087 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 04:57:36.464829 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. Apr 16 04:57:36.468721 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:41092.service - OpenSSH per-connection server daemon (10.0.0.1:41092). Apr 16 04:57:36.475453 systemd-logind[1544]: Removed session 7. Apr 16 04:57:36.879762 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 41092 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:36.907955 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:37.092726 systemd-logind[1544]: New session 8 of user core. Apr 16 04:57:37.130789 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 16 04:57:37.271909 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 04:57:37.272928 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:57:37.291568 sudo[1794]: pam_unix(sudo:session): session closed for user root Apr 16 04:57:37.400848 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 04:57:37.402781 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:57:37.484719 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 04:57:37.706926 augenrules[1816]: No rules Apr 16 04:57:37.708576 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 04:57:37.708875 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 04:57:37.714671 sudo[1793]: pam_unix(sudo:session): session closed for user root Apr 16 04:57:37.767734 sshd[1792]: Connection closed by 10.0.0.1 port 41092 Apr 16 04:57:37.770028 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Apr 16 04:57:37.824579 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:41092.service: Deactivated successfully. Apr 16 04:57:37.833725 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 04:57:37.834772 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit. Apr 16 04:57:37.846481 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:41106.service - OpenSSH per-connection server daemon (10.0.0.1:41106). Apr 16 04:57:37.848531 systemd-logind[1544]: Removed session 8. Apr 16 04:57:38.268381 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 41106 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:57:38.274601 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:57:38.416205 systemd-logind[1544]: New session 9 of user core. 
Apr 16 04:57:38.437009 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 04:57:38.527066 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 04:57:38.531460 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 04:57:39.280221 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 04:57:39.332036 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 04:57:40.065013 dockerd[1849]: time="2026-04-16T04:57:40.064604386Z" level=info msg="Starting up" Apr 16 04:57:40.068794 dockerd[1849]: time="2026-04-16T04:57:40.067469032Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 04:57:40.129870 dockerd[1849]: time="2026-04-16T04:57:40.129532970Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 04:57:40.395436 dockerd[1849]: time="2026-04-16T04:57:40.394390499Z" level=info msg="Loading containers: start." Apr 16 04:57:40.411208 kernel: Initializing XFRM netlink socket Apr 16 04:57:41.104221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 04:57:41.106515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:57:41.198518 systemd-networkd[1530]: docker0: Link UP Apr 16 04:57:41.218411 dockerd[1849]: time="2026-04-16T04:57:41.217920398Z" level=info msg="Loading containers: done." 
Apr 16 04:57:41.307209 dockerd[1849]: time="2026-04-16T04:57:41.306830451Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 04:57:41.307209 dockerd[1849]: time="2026-04-16T04:57:41.306986168Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 04:57:41.307209 dockerd[1849]: time="2026-04-16T04:57:41.307218647Z" level=info msg="Initializing buildkit" Apr 16 04:57:41.366759 dockerd[1849]: time="2026-04-16T04:57:41.364686458Z" level=info msg="Completed buildkit initialization" Apr 16 04:57:41.374170 dockerd[1849]: time="2026-04-16T04:57:41.373768114Z" level=info msg="Daemon has completed initialization" Apr 16 04:57:41.374170 dockerd[1849]: time="2026-04-16T04:57:41.373889772Z" level=info msg="API listen on /run/docker.sock" Apr 16 04:57:41.376178 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 04:57:41.692542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:57:41.708978 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:57:42.201265 kubelet[2072]: E0416 04:57:42.199959 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:57:42.205817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:57:42.205921 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 04:57:42.206681 systemd[1]: kubelet.service: Consumed 894ms CPU time, 110.9M memory peak. Apr 16 04:57:42.208073 containerd[1586]: time="2026-04-16T04:57:42.208032780Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 04:57:43.562812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045313556.mount: Deactivated successfully. Apr 16 04:57:45.381543 containerd[1586]: time="2026-04-16T04:57:45.380204536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:45.381543 containerd[1586]: time="2026-04-16T04:57:45.380486811Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 04:57:45.385421 containerd[1586]: time="2026-04-16T04:57:45.385331468Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:45.392581 containerd[1586]: time="2026-04-16T04:57:45.392234434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:45.394050 containerd[1586]: time="2026-04-16T04:57:45.393533900Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 3.185446382s" Apr 16 04:57:45.394050 containerd[1586]: time="2026-04-16T04:57:45.393565817Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 04:57:45.398078 containerd[1586]: time="2026-04-16T04:57:45.397831678Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 04:57:47.442780 containerd[1586]: time="2026-04-16T04:57:47.442321017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:47.448519 containerd[1586]: time="2026-04-16T04:57:47.448141073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 16 04:57:47.450948 containerd[1586]: time="2026-04-16T04:57:47.450896332Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:47.453592 containerd[1586]: time="2026-04-16T04:57:47.453532081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:47.454748 containerd[1586]: time="2026-04-16T04:57:47.454702197Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 2.056688001s" Apr 16 04:57:47.454784 containerd[1586]: time="2026-04-16T04:57:47.454748289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 16 04:57:47.461680 containerd[1586]: 
time="2026-04-16T04:57:47.461402018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 04:57:49.439724 containerd[1586]: time="2026-04-16T04:57:49.433587444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:49.442102 containerd[1586]: time="2026-04-16T04:57:49.437132191Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 16 04:57:49.450444 containerd[1586]: time="2026-04-16T04:57:49.450372207Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:49.476421 containerd[1586]: time="2026-04-16T04:57:49.475607566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:49.485963 containerd[1586]: time="2026-04-16T04:57:49.482195690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 2.019448616s" Apr 16 04:57:49.500305 containerd[1586]: time="2026-04-16T04:57:49.487869536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 16 04:57:49.508865 containerd[1586]: time="2026-04-16T04:57:49.508063645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 04:57:51.222198 update_engine[1545]: I20260416 04:57:51.221757 1545 
update_attempter.cc:509] Updating boot flags... Apr 16 04:57:51.467557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount104423966.mount: Deactivated successfully. Apr 16 04:57:51.926313 containerd[1586]: time="2026-04-16T04:57:51.925876749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 16 04:57:51.926313 containerd[1586]: time="2026-04-16T04:57:51.926156760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:51.936955 containerd[1586]: time="2026-04-16T04:57:51.933024848Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:51.949804 containerd[1586]: time="2026-04-16T04:57:51.949420078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:51.956637 containerd[1586]: time="2026-04-16T04:57:51.956547188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 2.447633432s" Apr 16 04:57:51.956771 containerd[1586]: time="2026-04-16T04:57:51.956701795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 16 04:57:51.959287 containerd[1586]: time="2026-04-16T04:57:51.957793833Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 04:57:52.342836 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 04:57:52.344995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:57:52.700680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:57:52.722182 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 04:57:52.756794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440564116.mount: Deactivated successfully. Apr 16 04:57:52.816832 kubelet[2182]: E0416 04:57:52.816638 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 04:57:52.822249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 04:57:52.822384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 04:57:52.823553 systemd[1]: kubelet.service: Consumed 318ms CPU time, 110.5M memory peak. 
Apr 16 04:57:54.049429 containerd[1586]: time="2026-04-16T04:57:54.048881309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.049429 containerd[1586]: time="2026-04-16T04:57:54.049239362Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 16 04:57:54.052761 containerd[1586]: time="2026-04-16T04:57:54.052215809Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.056823 containerd[1586]: time="2026-04-16T04:57:54.056780530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.057466 containerd[1586]: time="2026-04-16T04:57:54.057435184Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.099615383s" Apr 16 04:57:54.057466 containerd[1586]: time="2026-04-16T04:57:54.057465791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 16 04:57:54.058337 containerd[1586]: time="2026-04-16T04:57:54.058314256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 04:57:54.628844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727876130.mount: Deactivated successfully. 
Apr 16 04:57:54.643769 containerd[1586]: time="2026-04-16T04:57:54.643338258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.650858 containerd[1586]: time="2026-04-16T04:57:54.643379101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 04:57:54.651090 containerd[1586]: time="2026-04-16T04:57:54.650911724Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.670795 containerd[1586]: time="2026-04-16T04:57:54.670459848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:54.672024 containerd[1586]: time="2026-04-16T04:57:54.671674642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 613.303815ms" Apr 16 04:57:54.672024 containerd[1586]: time="2026-04-16T04:57:54.671703175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 04:57:54.678767 containerd[1586]: time="2026-04-16T04:57:54.678450116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 04:57:55.250251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128488262.mount: Deactivated successfully. 
Apr 16 04:57:56.358803 containerd[1586]: time="2026-04-16T04:57:56.358397607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:56.358803 containerd[1586]: time="2026-04-16T04:57:56.358516008Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 16 04:57:56.364006 containerd[1586]: time="2026-04-16T04:57:56.362983143Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:56.367040 containerd[1586]: time="2026-04-16T04:57:56.366997273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:57:56.367989 containerd[1586]: time="2026-04-16T04:57:56.367954445Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.689337185s" Apr 16 04:57:56.368027 containerd[1586]: time="2026-04-16T04:57:56.368012365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 16 04:57:59.247974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:57:59.248208 systemd[1]: kubelet.service: Consumed 318ms CPU time, 110.5M memory peak. Apr 16 04:57:59.252238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:57:59.318925 systemd[1]: Reload requested from client PID 2338 ('systemctl') (unit session-9.scope)... 
Apr 16 04:57:59.318947 systemd[1]: Reloading... Apr 16 04:57:59.442164 zram_generator::config[2378]: No configuration found. Apr 16 04:57:59.726095 systemd[1]: Reloading finished in 406 ms. Apr 16 04:57:59.796848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 04:57:59.796991 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 04:57:59.798480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:57:59.798711 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.1M memory peak. Apr 16 04:57:59.801033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:58:00.042635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:58:00.060575 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 04:58:00.116541 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 04:58:00.116541 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 04:58:00.116541 kubelet[2429]: I0416 04:58:00.116435 2429 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:58:00.532577 kubelet[2429]: I0416 04:58:00.532072 2429 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 04:58:00.532577 kubelet[2429]: I0416 04:58:00.532256 2429 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:58:00.534049 kubelet[2429]: I0416 04:58:00.534007 2429 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 04:58:00.534147 kubelet[2429]: I0416 04:58:00.534041 2429 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 04:58:00.534895 kubelet[2429]: I0416 04:58:00.534864 2429 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:58:00.597872 kubelet[2429]: I0416 04:58:00.597667 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:58:00.599934 kubelet[2429]: E0416 04:58:00.599816 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 04:58:00.604847 kubelet[2429]: I0416 04:58:00.604805 2429 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 04:58:00.608670 kubelet[2429]: I0416 04:58:00.608653 2429 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 04:58:00.609299 kubelet[2429]: I0416 04:58:00.609254 2429 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:58:00.609474 kubelet[2429]: I0416 04:58:00.609296 2429 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 04:58:00.609572 kubelet[2429]: I0416 04:58:00.609502 2429 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:58:00.609572 
kubelet[2429]: I0416 04:58:00.609510 2429 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 04:58:00.609604 kubelet[2429]: I0416 04:58:00.609588 2429 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 04:58:00.616479 kubelet[2429]: I0416 04:58:00.616285 2429 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:58:00.619811 kubelet[2429]: I0416 04:58:00.619775 2429 kubelet.go:475] "Attempting to sync node with API server" Apr 16 04:58:00.619891 kubelet[2429]: I0416 04:58:00.619857 2429 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:58:00.619972 kubelet[2429]: I0416 04:58:00.619947 2429 kubelet.go:387] "Adding apiserver pod source" Apr 16 04:58:00.620037 kubelet[2429]: I0416 04:58:00.620007 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:58:00.621157 kubelet[2429]: E0416 04:58:00.621094 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:58:00.621199 kubelet[2429]: E0416 04:58:00.621102 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:58:00.626938 kubelet[2429]: I0416 04:58:00.626721 2429 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 04:58:00.627577 kubelet[2429]: I0416 04:58:00.627539 2429 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:58:00.627577 kubelet[2429]: I0416 04:58:00.627572 2429 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 04:58:00.627733 kubelet[2429]: W0416 04:58:00.627717 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 04:58:00.631868 kubelet[2429]: I0416 04:58:00.631184 2429 server.go:1262] "Started kubelet" Apr 16 04:58:00.631868 kubelet[2429]: I0416 04:58:00.631287 2429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:58:00.631868 kubelet[2429]: I0416 04:58:00.631452 2429 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:58:00.631868 kubelet[2429]: I0416 04:58:00.631486 2429 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 04:58:00.631868 kubelet[2429]: I0416 04:58:00.631759 2429 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:58:00.632133 kubelet[2429]: I0416 04:58:00.632069 2429 server.go:310] "Adding debug handlers to kubelet server" Apr 16 04:58:00.633667 kubelet[2429]: I0416 04:58:00.632552 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:58:00.637533 kubelet[2429]: E0416 04:58:00.636477 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6bd84b00bf6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 04:58:00.631146206 +0000 UTC m=+0.560848328,LastTimestamp:2026-04-16 04:58:00.631146206 +0000 UTC m=+0.560848328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 04:58:00.638023 kubelet[2429]: E0416 04:58:00.637976 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:00.638023 kubelet[2429]: I0416 04:58:00.638028 2429 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 04:58:00.638222 kubelet[2429]: I0416 04:58:00.638200 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:58:00.638222 kubelet[2429]: I0416 04:58:00.638218 2429 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 04:58:00.638271 kubelet[2429]: I0416 04:58:00.638245 2429 reconciler.go:29] "Reconciler: start to sync state" Apr 16 04:58:00.638378 kubelet[2429]: E0416 04:58:00.638351 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Apr 16 04:58:00.638659 kubelet[2429]: E0416 04:58:00.638594 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:58:00.639687 kubelet[2429]: I0416 04:58:00.639518 2429 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:58:00.639687 
kubelet[2429]: I0416 04:58:00.639580 2429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:58:00.640498 kubelet[2429]: E0416 04:58:00.640475 2429 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:58:00.640709 kubelet[2429]: I0416 04:58:00.640680 2429 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:58:00.672129 kubelet[2429]: I0416 04:58:00.671929 2429 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 04:58:00.674218 kubelet[2429]: I0416 04:58:00.674177 2429 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 04:58:00.674306 kubelet[2429]: I0416 04:58:00.674251 2429 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 04:58:00.674353 kubelet[2429]: I0416 04:58:00.674340 2429 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 04:58:00.674480 kubelet[2429]: E0416 04:58:00.674384 2429 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:58:00.685219 kubelet[2429]: I0416 04:58:00.685069 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:58:00.685219 kubelet[2429]: I0416 04:58:00.685086 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:58:00.685219 kubelet[2429]: I0416 04:58:00.685104 2429 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:58:00.686606 kubelet[2429]: E0416 04:58:00.686558 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:58:00.688498 kubelet[2429]: I0416 04:58:00.688466 2429 policy_none.go:49] "None policy: Start" Apr 16 04:58:00.688498 kubelet[2429]: I0416 04:58:00.688500 2429 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 04:58:00.688563 kubelet[2429]: I0416 04:58:00.688509 2429 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 04:58:00.694588 kubelet[2429]: I0416 04:58:00.693976 2429 policy_none.go:47] "Start" Apr 16 04:58:00.712850 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 04:58:00.741095 kubelet[2429]: E0416 04:58:00.740586 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:00.745798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 04:58:00.748471 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 04:58:00.767483 kubelet[2429]: E0416 04:58:00.767291 2429 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:58:00.768179 kubelet[2429]: I0416 04:58:00.767764 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:58:00.768179 kubelet[2429]: I0416 04:58:00.767777 2429 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:58:00.768341 kubelet[2429]: I0416 04:58:00.768326 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:58:00.770285 kubelet[2429]: E0416 04:58:00.770221 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 04:58:00.770285 kubelet[2429]: E0416 04:58:00.770299 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 04:58:00.790879 systemd[1]: Created slice kubepods-burstable-podb6d25d55d8c478f9ff49f48ea83ff00a.slice - libcontainer container kubepods-burstable-podb6d25d55d8c478f9ff49f48ea83ff00a.slice. Apr 16 04:58:00.799661 kubelet[2429]: E0416 04:58:00.799530 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:00.804197 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 16 04:58:00.809362 kubelet[2429]: E0416 04:58:00.809319 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:00.812179 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 16 04:58:00.817872 kubelet[2429]: E0416 04:58:00.817350 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:00.847002 kubelet[2429]: E0416 04:58:00.846684 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Apr 16 04:58:00.883744 kubelet[2429]: I0416 04:58:00.883372 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:58:00.885168 kubelet[2429]: E0416 04:58:00.884762 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 16 04:58:00.955407 kubelet[2429]: I0416 04:58:00.954207 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:00.957533 kubelet[2429]: I0416 04:58:00.956578 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:00.957533 kubelet[2429]: I0416 04:58:00.956688 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:00.957533 kubelet[2429]: I0416 04:58:00.956701 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:00.957533 kubelet[2429]: I0416 04:58:00.956717 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:58:00.957533 kubelet[2429]: I0416 04:58:00.956729 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:00.957697 kubelet[2429]: I0416 04:58:00.956740 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:00.957697 kubelet[2429]: I0416 04:58:00.956752 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:00.957697 kubelet[2429]: I0416 04:58:00.956790 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:01.097486 kubelet[2429]: I0416 04:58:01.097444 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:58:01.097992 kubelet[2429]: E0416 04:58:01.097941 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 16 04:58:01.103563 kubelet[2429]: E0416 04:58:01.103528 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.104782 containerd[1586]: time="2026-04-16T04:58:01.104734754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6d25d55d8c478f9ff49f48ea83ff00a,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:01.116249 kubelet[2429]: E0416 04:58:01.115939 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.117705 containerd[1586]: time="2026-04-16T04:58:01.117592426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:01.121663 kubelet[2429]: E0416 04:58:01.121625 2429 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.122032 containerd[1586]: time="2026-04-16T04:58:01.121997335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:01.256895 kubelet[2429]: E0416 04:58:01.256695 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Apr 16 04:58:01.511719 kubelet[2429]: I0416 04:58:01.511437 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:58:01.512449 kubelet[2429]: E0416 04:58:01.512425 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Apr 16 04:58:01.661306 kubelet[2429]: E0416 04:58:01.661192 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 04:58:01.665171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319806765.mount: Deactivated successfully. 
Apr 16 04:58:01.683900 containerd[1586]: time="2026-04-16T04:58:01.682349452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:58:01.692177 containerd[1586]: time="2026-04-16T04:58:01.691898551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 04:58:01.693583 containerd[1586]: time="2026-04-16T04:58:01.693551647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:58:01.696391 containerd[1586]: time="2026-04-16T04:58:01.696204128Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:58:01.696391 containerd[1586]: time="2026-04-16T04:58:01.696393625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 04:58:01.699383 containerd[1586]: time="2026-04-16T04:58:01.699288016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 04:58:01.706307 containerd[1586]: time="2026-04-16T04:58:01.704910665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 04:58:01.710362 containerd[1586]: time="2026-04-16T04:58:01.710135544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 
04:58:01.711530 containerd[1586]: time="2026-04-16T04:58:01.711480741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.328858ms" Apr 16 04:58:01.711725 containerd[1586]: time="2026-04-16T04:58:01.711700292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.29473ms" Apr 16 04:58:01.711997 containerd[1586]: time="2026-04-16T04:58:01.711954178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.624834ms" Apr 16 04:58:01.765897 containerd[1586]: time="2026-04-16T04:58:01.763841738Z" level=info msg="connecting to shim ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8" address="unix:///run/containerd/s/dc7a7ce30d069dbe9ebe1d283942cbcd04e9842b63053d0f6ed28845e5ef3373" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:01.768242 containerd[1586]: time="2026-04-16T04:58:01.763811424Z" level=info msg="connecting to shim 9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42" address="unix:///run/containerd/s/8a2ad9f1d9366dfaa003403cb68c24250da1eb832de8ad80acc6c68bc9ed25d7" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:01.768242 containerd[1586]: time="2026-04-16T04:58:01.766853406Z" level=info msg="connecting to shim 
7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792" address="unix:///run/containerd/s/8bdc8b5b9876449a9b52a53d33b1d46f434fbb2732a712052b4674ee775ad47f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:01.800507 systemd[1]: Started cri-containerd-7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792.scope - libcontainer container 7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792. Apr 16 04:58:01.804744 systemd[1]: Started cri-containerd-9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42.scope - libcontainer container 9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42. Apr 16 04:58:01.806625 systemd[1]: Started cri-containerd-ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8.scope - libcontainer container ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8. Apr 16 04:58:01.861562 kubelet[2429]: E0416 04:58:01.859036 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 04:58:01.894194 containerd[1586]: time="2026-04-16T04:58:01.893949934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42\"" Apr 16 04:58:01.908238 kubelet[2429]: E0416 04:58:01.907980 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.909870 kubelet[2429]: E0416 04:58:01.909762 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 04:58:01.915676 containerd[1586]: time="2026-04-16T04:58:01.915481406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6d25d55d8c478f9ff49f48ea83ff00a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8\"" Apr 16 04:58:01.917508 containerd[1586]: time="2026-04-16T04:58:01.916803048Z" level=info msg="CreateContainer within sandbox \"9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 04:58:01.917569 kubelet[2429]: E0416 04:58:01.916907 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.918710 containerd[1586]: time="2026-04-16T04:58:01.918682409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792\"" Apr 16 04:58:01.919300 kubelet[2429]: E0416 04:58:01.919286 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:01.923282 containerd[1586]: time="2026-04-16T04:58:01.922629592Z" level=info msg="CreateContainer within sandbox \"ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 04:58:01.935164 containerd[1586]: time="2026-04-16T04:58:01.934556556Z" level=info msg="Container 
56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:01.999274 kubelet[2429]: E0416 04:58:01.999068 2429 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 04:58:02.000016 containerd[1586]: time="2026-04-16T04:58:01.999873488Z" level=info msg="CreateContainer within sandbox \"7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 04:58:02.000274 containerd[1586]: time="2026-04-16T04:58:02.000246196Z" level=info msg="Container c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:02.010811 containerd[1586]: time="2026-04-16T04:58:02.010440479Z" level=info msg="CreateContainer within sandbox \"9c8fba49d817ce3be5e6829f39bcef6bc2ba82cdda32c6e6dc998daf94731f42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9\"" Apr 16 04:58:02.013450 containerd[1586]: time="2026-04-16T04:58:02.013415932Z" level=info msg="StartContainer for \"56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9\"" Apr 16 04:58:02.014473 containerd[1586]: time="2026-04-16T04:58:02.014445018Z" level=info msg="CreateContainer within sandbox \"ea9a15c8e49632af4f0e47fce9078e7c6a687555bbf5ce554761bd0e6214a5b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674\"" Apr 16 04:58:02.014610 containerd[1586]: time="2026-04-16T04:58:02.014509103Z" level=info msg="connecting to shim 
56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9" address="unix:///run/containerd/s/8a2ad9f1d9366dfaa003403cb68c24250da1eb832de8ad80acc6c68bc9ed25d7" protocol=ttrpc version=3 Apr 16 04:58:02.014789 containerd[1586]: time="2026-04-16T04:58:02.014758254Z" level=info msg="StartContainer for \"c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674\"" Apr 16 04:58:02.015816 containerd[1586]: time="2026-04-16T04:58:02.015794009Z" level=info msg="connecting to shim c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674" address="unix:///run/containerd/s/dc7a7ce30d069dbe9ebe1d283942cbcd04e9842b63053d0f6ed28845e5ef3373" protocol=ttrpc version=3 Apr 16 04:58:02.023068 containerd[1586]: time="2026-04-16T04:58:02.021938082Z" level=info msg="Container 02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:02.041843 containerd[1586]: time="2026-04-16T04:58:02.041780511Z" level=info msg="CreateContainer within sandbox \"7377b21e76316eab7ceb374bf67496658a5dce95a6b01c195bbfe30196a28792\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e\"" Apr 16 04:58:02.042917 containerd[1586]: time="2026-04-16T04:58:02.042813115Z" level=info msg="StartContainer for \"02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e\"" Apr 16 04:58:02.046951 systemd[1]: Started cri-containerd-56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9.scope - libcontainer container 56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9. 
Apr 16 04:58:02.049161 containerd[1586]: time="2026-04-16T04:58:02.048687633Z" level=info msg="connecting to shim 02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e" address="unix:///run/containerd/s/8bdc8b5b9876449a9b52a53d33b1d46f434fbb2732a712052b4674ee775ad47f" protocol=ttrpc version=3 Apr 16 04:58:02.048758 systemd[1]: Started cri-containerd-c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674.scope - libcontainer container c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674. Apr 16 04:58:02.064406 kubelet[2429]: E0416 04:58:02.064303 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Apr 16 04:58:02.082306 systemd[1]: Started cri-containerd-02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e.scope - libcontainer container 02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e. 
Apr 16 04:58:02.129971 containerd[1586]: time="2026-04-16T04:58:02.129781572Z" level=info msg="StartContainer for \"c9d39334da786362c31d1bf77b32282bbab3bde4f682e06b0c545dcd6093f674\" returns successfully" Apr 16 04:58:02.138141 containerd[1586]: time="2026-04-16T04:58:02.137615921Z" level=info msg="StartContainer for \"56f45922a9a0e62e74012d017113b11938e0cd3df4e1e2588dfdcea60196dda9\" returns successfully" Apr 16 04:58:02.167269 containerd[1586]: time="2026-04-16T04:58:02.167055617Z" level=info msg="StartContainer for \"02a3b53d79a8313479766d21846d7f66e4c2d85ea3f48306a2cfc3f03f7e145e\" returns successfully" Apr 16 04:58:02.326846 kubelet[2429]: I0416 04:58:02.326171 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:58:02.765324 kubelet[2429]: E0416 04:58:02.762634 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:02.765324 kubelet[2429]: E0416 04:58:02.764069 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:02.767710 kubelet[2429]: E0416 04:58:02.767614 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:02.767888 kubelet[2429]: E0416 04:58:02.767859 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:02.780631 kubelet[2429]: E0416 04:58:02.780431 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:02.781701 kubelet[2429]: E0416 04:58:02.780696 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:03.331746 kubelet[2429]: I0416 04:58:03.331306 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:58:03.337066 kubelet[2429]: E0416 04:58:03.335030 2429 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 04:58:03.389878 kubelet[2429]: E0416 04:58:03.389272 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:03.492016 kubelet[2429]: E0416 04:58:03.491281 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:03.596374 kubelet[2429]: E0416 04:58:03.595793 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:03.698767 kubelet[2429]: E0416 04:58:03.698135 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:03.793086 kubelet[2429]: E0416 04:58:03.792847 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:03.794105 kubelet[2429]: E0416 04:58:03.793225 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:03.794105 kubelet[2429]: E0416 04:58:03.793467 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 04:58:03.794105 kubelet[2429]: E0416 04:58:03.793509 2429 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Apr 16 04:58:03.794105 kubelet[2429]: E0416 04:58:03.793544 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:03.794105 kubelet[2429]: E0416 04:58:03.793609 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:03.801671 kubelet[2429]: E0416 04:58:03.801185 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:03.904978 kubelet[2429]: E0416 04:58:03.903796 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:04.016538 kubelet[2429]: E0416 04:58:04.015931 2429 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 04:58:04.140136 kubelet[2429]: I0416 04:58:04.139611 2429 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:04.152919 kubelet[2429]: I0416 04:58:04.152554 2429 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:04.156373 kubelet[2429]: I0416 04:58:04.156061 2429 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:58:04.642184 kubelet[2429]: I0416 04:58:04.641714 2429 apiserver.go:52] "Watching apiserver" Apr 16 04:58:04.740402 kubelet[2429]: I0416 04:58:04.739751 2429 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 04:58:04.793423 kubelet[2429]: E0416 04:58:04.792780 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:04.793423 kubelet[2429]: E0416 04:58:04.792780 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:04.793423 kubelet[2429]: E0416 04:58:04.792861 2429 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:05.514283 systemd[1]: Reload requested from client PID 2721 ('systemctl') (unit session-9.scope)... Apr 16 04:58:05.514301 systemd[1]: Reloading... Apr 16 04:58:05.646253 zram_generator::config[2764]: No configuration found. Apr 16 04:58:06.016295 systemd[1]: Reloading finished in 501 ms. Apr 16 04:58:06.046345 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:58:06.047080 kubelet[2429]: I0416 04:58:06.046844 2429 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:58:06.072519 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 04:58:06.072850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 04:58:06.073027 systemd[1]: kubelet.service: Consumed 2.242s CPU time, 125.4M memory peak. Apr 16 04:58:06.081932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 04:58:06.393258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 04:58:06.412875 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 04:58:06.576209 sudo[2821]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 16 04:58:06.576586 sudo[2821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 16 04:58:06.634276 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 04:58:06.634276 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 04:58:06.634276 kubelet[2809]: I0416 04:58:06.633924 2809 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 04:58:06.639708 kubelet[2809]: I0416 04:58:06.639639 2809 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 04:58:06.639708 kubelet[2809]: I0416 04:58:06.639666 2809 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 04:58:06.639832 kubelet[2809]: I0416 04:58:06.639741 2809 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 04:58:06.639832 kubelet[2809]: I0416 04:58:06.639750 2809 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 04:58:06.639905 kubelet[2809]: I0416 04:58:06.639895 2809 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 04:58:06.651945 kubelet[2809]: I0416 04:58:06.650619 2809 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 04:58:06.655934 kubelet[2809]: I0416 04:58:06.655902 2809 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 04:58:06.677221 kubelet[2809]: I0416 04:58:06.677040 2809 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 04:58:06.715339 kubelet[2809]: I0416 04:58:06.714779 2809 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 04:58:06.717019 kubelet[2809]: I0416 04:58:06.716773 2809 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 04:58:06.717353 kubelet[2809]: I0416 04:58:06.717027 2809 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 04:58:06.717465 kubelet[2809]: I0416 04:58:06.717389 2809 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 04:58:06.717465 kubelet[2809]: I0416 04:58:06.717402 2809 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 04:58:06.717529 kubelet[2809]: I0416 04:58:06.717508 2809 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 04:58:06.717990 kubelet[2809]: I0416 04:58:06.717951 2809 state_mem.go:36] 
"Initialized new in-memory state store" Apr 16 04:58:06.719313 kubelet[2809]: I0416 04:58:06.718597 2809 kubelet.go:475] "Attempting to sync node with API server" Apr 16 04:58:06.723940 kubelet[2809]: I0416 04:58:06.723336 2809 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 04:58:06.727382 kubelet[2809]: I0416 04:58:06.724799 2809 kubelet.go:387] "Adding apiserver pod source" Apr 16 04:58:06.727382 kubelet[2809]: I0416 04:58:06.724898 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 04:58:06.802253 kubelet[2809]: I0416 04:58:06.799804 2809 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 04:58:06.802253 kubelet[2809]: I0416 04:58:06.800603 2809 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 04:58:06.802253 kubelet[2809]: I0416 04:58:06.800627 2809 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 04:58:06.813760 kubelet[2809]: I0416 04:58:06.812103 2809 server.go:1262] "Started kubelet" Apr 16 04:58:06.813854 kubelet[2809]: I0416 04:58:06.813315 2809 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 04:58:06.816006 kubelet[2809]: I0416 04:58:06.814934 2809 server.go:310] "Adding debug handlers to kubelet server" Apr 16 04:58:06.830584 kubelet[2809]: I0416 04:58:06.813627 2809 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 04:58:06.832205 kubelet[2809]: I0416 04:58:06.831689 2809 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 04:58:06.832205 kubelet[2809]: I0416 04:58:06.831995 2809 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 04:58:06.837149 kubelet[2809]: I0416 04:58:06.836660 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 04:58:06.843617 kubelet[2809]: I0416 04:58:06.843387 2809 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 04:58:06.848029 kubelet[2809]: I0416 04:58:06.847803 2809 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 04:58:06.857271 kubelet[2809]: I0416 04:58:06.854065 2809 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 04:58:06.859537 kubelet[2809]: I0416 04:58:06.857332 2809 reconciler.go:29] "Reconciler: start to sync state" Apr 16 04:58:06.864318 kubelet[2809]: I0416 04:58:06.864238 2809 factory.go:223] Registration of the systemd container factory successfully Apr 16 04:58:06.864597 kubelet[2809]: I0416 04:58:06.864452 2809 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 04:58:06.864975 kubelet[2809]: E0416 04:58:06.864950 2809 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 04:58:06.868496 kubelet[2809]: I0416 04:58:06.867376 2809 factory.go:223] Registration of the containerd container factory successfully Apr 16 04:58:06.877190 kubelet[2809]: I0416 04:58:06.877154 2809 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 04:58:06.879000 kubelet[2809]: I0416 04:58:06.878861 2809 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 04:58:06.879000 kubelet[2809]: I0416 04:58:06.878901 2809 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 04:58:06.879000 kubelet[2809]: I0416 04:58:06.878980 2809 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 04:58:06.879140 kubelet[2809]: E0416 04:58:06.879097 2809 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954408 2809 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954450 2809 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954596 2809 state_mem.go:36] "Initialized new in-memory state store" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954920 2809 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954929 2809 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954943 2809 policy_none.go:49] "None policy: Start" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954953 2809 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 04:58:06.955030 kubelet[2809]: I0416 04:58:06.954962 2809 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 04:58:06.956542 kubelet[2809]: I0416 04:58:06.955080 2809 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 04:58:06.956542 kubelet[2809]: I0416 04:58:06.955085 2809 policy_none.go:47] "Start" Apr 16 04:58:06.980092 kubelet[2809]: E0416 04:58:06.979891 2809 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 04:58:06.985006 kubelet[2809]: E0416 04:58:06.984892 2809 
manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 04:58:06.988792 kubelet[2809]: I0416 04:58:06.988670 2809 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 04:58:06.989265 kubelet[2809]: I0416 04:58:06.988760 2809 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 04:58:06.989796 kubelet[2809]: I0416 04:58:06.989658 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 04:58:07.006255 kubelet[2809]: E0416 04:58:07.006033 2809 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 04:58:07.120527 sudo[2821]: pam_unix(sudo:session): session closed for user root Apr 16 04:58:07.152462 kubelet[2809]: I0416 04:58:07.152055 2809 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 04:58:07.163483 kubelet[2809]: I0416 04:58:07.163237 2809 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 04:58:07.163483 kubelet[2809]: I0416 04:58:07.163462 2809 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 04:58:07.187661 kubelet[2809]: I0416 04:58:07.186839 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:07.187661 kubelet[2809]: I0416 04:58:07.186851 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 04:58:07.200853 kubelet[2809]: I0416 04:58:07.200671 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.200853 kubelet[2809]: E0416 04:58:07.200621 2809 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 
16 04:58:07.213344 kubelet[2809]: E0416 04:58:07.212564 2809 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:07.217165 kubelet[2809]: E0416 04:58:07.212948 2809 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.384430 kubelet[2809]: I0416 04:58:07.383960 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.384430 kubelet[2809]: I0416 04:58:07.384091 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.384430 kubelet[2809]: I0416 04:58:07.384211 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 04:58:07.384430 kubelet[2809]: I0416 04:58:07.384224 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " 
pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:07.384430 kubelet[2809]: I0416 04:58:07.384240 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:07.387605 kubelet[2809]: I0416 04:58:07.384261 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6d25d55d8c478f9ff49f48ea83ff00a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6d25d55d8c478f9ff49f48ea83ff00a\") " pod="kube-system/kube-apiserver-localhost" Apr 16 04:58:07.387605 kubelet[2809]: I0416 04:58:07.384278 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.387605 kubelet[2809]: I0416 04:58:07.384295 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.387605 kubelet[2809]: I0416 04:58:07.384316 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 04:58:07.513590 kubelet[2809]: E0416 04:58:07.508653 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:07.517829 kubelet[2809]: E0416 04:58:07.517779 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:07.517829 kubelet[2809]: E0416 04:58:07.517797 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:07.732879 kubelet[2809]: I0416 04:58:07.732209 2809 apiserver.go:52] "Watching apiserver" Apr 16 04:58:07.757762 kubelet[2809]: I0416 04:58:07.757394 2809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 04:58:07.924195 kubelet[2809]: E0416 04:58:07.923625 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:07.924195 kubelet[2809]: E0416 04:58:07.924057 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:07.925336 kubelet[2809]: E0416 04:58:07.924344 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:08.040553 kubelet[2809]: I0416 04:58:08.040237 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.040093186 
podStartE2EDuration="4.040093186s" podCreationTimestamp="2026-04-16 04:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:08.026180128 +0000 UTC m=+1.579716456" watchObservedRunningTime="2026-04-16 04:58:08.040093186 +0000 UTC m=+1.593629510" Apr 16 04:58:08.054598 kubelet[2809]: I0416 04:58:08.054321 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.054224236 podStartE2EDuration="4.054224236s" podCreationTimestamp="2026-04-16 04:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:08.040690041 +0000 UTC m=+1.594226369" watchObservedRunningTime="2026-04-16 04:58:08.054224236 +0000 UTC m=+1.607760556" Apr 16 04:58:08.059049 kubelet[2809]: I0416 04:58:08.054793 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.05478289 podStartE2EDuration="4.05478289s" podCreationTimestamp="2026-04-16 04:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:08.054204176 +0000 UTC m=+1.607740494" watchObservedRunningTime="2026-04-16 04:58:08.05478289 +0000 UTC m=+1.608319215" Apr 16 04:58:08.825241 sudo[1829]: pam_unix(sudo:session): session closed for user root Apr 16 04:58:08.836402 sshd[1828]: Connection closed by 10.0.0.1 port 41106 Apr 16 04:58:08.838665 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:08.882067 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:41106.service: Deactivated successfully. Apr 16 04:58:08.888782 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 16 04:58:08.888986 systemd[1]: session-9.scope: Consumed 5.956s CPU time, 271.6M memory peak. Apr 16 04:58:08.890735 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit. Apr 16 04:58:08.895831 systemd-logind[1544]: Removed session 9. Apr 16 04:58:08.947253 kubelet[2809]: E0416 04:58:08.946856 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:08.947253 kubelet[2809]: E0416 04:58:08.946871 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:10.746506 kubelet[2809]: I0416 04:58:10.746310 2809 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 04:58:10.747686 kubelet[2809]: I0416 04:58:10.747142 2809 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 04:58:10.747717 containerd[1586]: time="2026-04-16T04:58:10.746879405Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 04:58:11.629208 systemd[1]: Created slice kubepods-besteffort-pod2e0f5309_fd95_45fb_b7cd_d5e3ec597ced.slice - libcontainer container kubepods-besteffort-pod2e0f5309_fd95_45fb_b7cd_d5e3ec597ced.slice. Apr 16 04:58:11.657310 systemd[1]: Created slice kubepods-burstable-podbde4ab81_4b19_4648_9e7a_bfa712df5b4d.slice - libcontainer container kubepods-burstable-podbde4ab81_4b19_4648_9e7a_bfa712df5b4d.slice. 
Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712524 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-config-path\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712719 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-net\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712812 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e0f5309-fd95-45fb-b7cd-d5e3ec597ced-kube-proxy\") pod \"kube-proxy-kzmln\" (UID: \"2e0f5309-fd95-45fb-b7cd-d5e3ec597ced\") " pod="kube-system/kube-proxy-kzmln" Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712841 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-run\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712912 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-cgroup\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.712868 kubelet[2809]: I0416 04:58:11.712924 2809 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-etc-cni-netd\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.712936 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hubble-tls\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.712951 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4zk\" (UniqueName: \"kubernetes.io/projected/2e0f5309-fd95-45fb-b7cd-d5e3ec597ced-kube-api-access-5s4zk\") pod \"kube-proxy-kzmln\" (UID: \"2e0f5309-fd95-45fb-b7cd-d5e3ec597ced\") " pod="kube-system/kube-proxy-kzmln" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.712963 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cni-path\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.712984 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvsm\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-kube-api-access-qzvsm\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.713004 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hostproc\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.713970 kubelet[2809]: I0416 04:58:11.713015 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-lib-modules\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713027 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-clustermesh-secrets\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713046 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-kernel\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713069 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e0f5309-fd95-45fb-b7cd-d5e3ec597ced-xtables-lock\") pod \"kube-proxy-kzmln\" (UID: \"2e0f5309-fd95-45fb-b7cd-d5e3ec597ced\") " pod="kube-system/kube-proxy-kzmln" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713085 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e0f5309-fd95-45fb-b7cd-d5e3ec597ced-lib-modules\") pod \"kube-proxy-kzmln\" (UID: 
\"2e0f5309-fd95-45fb-b7cd-d5e3ec597ced\") " pod="kube-system/kube-proxy-kzmln" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713097 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-bpf-maps\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:11.714077 kubelet[2809]: I0416 04:58:11.713155 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-xtables-lock\") pod \"cilium-wwfpc\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " pod="kube-system/cilium-wwfpc" Apr 16 04:58:12.009168 kubelet[2809]: E0416 04:58:12.004858 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:12.009168 kubelet[2809]: E0416 04:58:12.005975 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:12.009685 containerd[1586]: time="2026-04-16T04:58:12.007447894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzmln,Uid:2e0f5309-fd95-45fb-b7cd-d5e3ec597ced,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:12.009685 containerd[1586]: time="2026-04-16T04:58:12.007474129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwfpc,Uid:bde4ab81-4b19-4648-9e7a-bfa712df5b4d,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:12.045205 systemd[1]: Created slice kubepods-besteffort-podc87fdd58_6e76_4e5f_a092_b9a7291a0eef.slice - libcontainer container kubepods-besteffort-podc87fdd58_6e76_4e5f_a092_b9a7291a0eef.slice. 
Apr 16 04:58:12.078463 containerd[1586]: time="2026-04-16T04:58:12.078401964Z" level=info msg="connecting to shim 885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4" address="unix:///run/containerd/s/03d496b2099e1b8cdef427ceb0db84ff8dc09ffc968fb6dae75379ec558a7794" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:12.088417 containerd[1586]: time="2026-04-16T04:58:12.088153309Z" level=info msg="connecting to shim b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:12.121950 kubelet[2809]: I0416 04:58:12.121732 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-zxv5v\" (UID: \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\") " pod="kube-system/cilium-operator-6f9c7c5859-zxv5v" Apr 16 04:58:12.121950 kubelet[2809]: I0416 04:58:12.121850 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc7w\" (UniqueName: \"kubernetes.io/projected/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-kube-api-access-5lc7w\") pod \"cilium-operator-6f9c7c5859-zxv5v\" (UID: \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\") " pod="kube-system/cilium-operator-6f9c7c5859-zxv5v" Apr 16 04:58:12.127485 systemd[1]: Started cri-containerd-885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4.scope - libcontainer container 885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4. Apr 16 04:58:12.130289 systemd[1]: Started cri-containerd-b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd.scope - libcontainer container b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd. 
Apr 16 04:58:12.169837 containerd[1586]: time="2026-04-16T04:58:12.169554170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwfpc,Uid:bde4ab81-4b19-4648-9e7a-bfa712df5b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\"" Apr 16 04:58:12.172481 containerd[1586]: time="2026-04-16T04:58:12.172447202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzmln,Uid:2e0f5309-fd95-45fb-b7cd-d5e3ec597ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4\"" Apr 16 04:58:12.172541 kubelet[2809]: E0416 04:58:12.172513 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:12.172888 kubelet[2809]: E0416 04:58:12.172869 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:12.173595 containerd[1586]: time="2026-04-16T04:58:12.173560468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 16 04:58:12.182900 containerd[1586]: time="2026-04-16T04:58:12.182629540Z" level=info msg="CreateContainer within sandbox \"885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 04:58:12.218401 containerd[1586]: time="2026-04-16T04:58:12.217948608Z" level=info msg="Container 0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:12.236783 containerd[1586]: time="2026-04-16T04:58:12.236652598Z" level=info msg="CreateContainer within sandbox \"885c61df9c7f194f98e2bccda2a3630293e567287aeb5d5c1904c190daf36af4\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946\"" Apr 16 04:58:12.237889 containerd[1586]: time="2026-04-16T04:58:12.237857852Z" level=info msg="StartContainer for \"0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946\"" Apr 16 04:58:12.239240 containerd[1586]: time="2026-04-16T04:58:12.239213972Z" level=info msg="connecting to shim 0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946" address="unix:///run/containerd/s/03d496b2099e1b8cdef427ceb0db84ff8dc09ffc968fb6dae75379ec558a7794" protocol=ttrpc version=3 Apr 16 04:58:12.262247 systemd[1]: Started cri-containerd-0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946.scope - libcontainer container 0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946. Apr 16 04:58:12.347287 containerd[1586]: time="2026-04-16T04:58:12.346980831Z" level=info msg="StartContainer for \"0130ffa442374ac8b81a7be030df7edf723cc457e94e5d736042ccb03059f946\" returns successfully" Apr 16 04:58:12.355312 kubelet[2809]: E0416 04:58:12.355152 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:12.362417 containerd[1586]: time="2026-04-16T04:58:12.362169306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zxv5v,Uid:c87fdd58-6e76-4e5f-a092-b9a7291a0eef,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:12.396931 containerd[1586]: time="2026-04-16T04:58:12.396423098Z" level=info msg="connecting to shim 43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa" address="unix:///run/containerd/s/594a3673e8f1433964b239e623ccfc7fe1551d7b5cd17abf016c377df97532a4" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:12.426324 systemd[1]: Started cri-containerd-43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa.scope - 
libcontainer container 43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa. Apr 16 04:58:12.477229 containerd[1586]: time="2026-04-16T04:58:12.476861173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zxv5v,Uid:c87fdd58-6e76-4e5f-a092-b9a7291a0eef,Namespace:kube-system,Attempt:0,} returns sandbox id \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\"" Apr 16 04:58:12.484662 kubelet[2809]: E0416 04:58:12.484425 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:13.037905 kubelet[2809]: E0416 04:58:13.037462 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:13.048797 kubelet[2809]: E0416 04:58:13.048490 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:13.095617 kubelet[2809]: I0416 04:58:13.095225 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzmln" podStartSLOduration=2.095093108 podStartE2EDuration="2.095093108s" podCreationTimestamp="2026-04-16 04:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:13.048774206 +0000 UTC m=+6.602310539" watchObservedRunningTime="2026-04-16 04:58:13.095093108 +0000 UTC m=+6.648629436" Apr 16 04:58:14.043180 kubelet[2809]: E0416 04:58:14.042875 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:14.832391 kubelet[2809]: E0416 04:58:14.832045 2809 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:15.108345 kubelet[2809]: E0416 04:58:15.107022 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:16.270167 kubelet[2809]: E0416 04:58:16.269879 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:17.019771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310873918.mount: Deactivated successfully. Apr 16 04:58:17.097365 kubelet[2809]: E0416 04:58:17.094225 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:18.101793 kubelet[2809]: E0416 04:58:18.101152 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:19.639855 containerd[1586]: time="2026-04-16T04:58:19.639561925Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:58:19.639855 containerd[1586]: time="2026-04-16T04:58:19.639865776Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 16 04:58:19.641470 containerd[1586]: time="2026-04-16T04:58:19.641071308Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 16 04:58:19.644096 containerd[1586]: time="2026-04-16T04:58:19.643786707Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.470148921s" Apr 16 04:58:19.644096 containerd[1586]: time="2026-04-16T04:58:19.643959829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 16 04:58:19.646434 containerd[1586]: time="2026-04-16T04:58:19.646381188Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 16 04:58:19.667313 containerd[1586]: time="2026-04-16T04:58:19.667021413Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 16 04:58:19.683912 containerd[1586]: time="2026-04-16T04:58:19.683331642Z" level=info msg="Container 9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:19.695965 containerd[1586]: time="2026-04-16T04:58:19.695708047Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\"" Apr 16 04:58:19.698103 containerd[1586]: time="2026-04-16T04:58:19.698025730Z" level=info msg="StartContainer for 
\"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\"" Apr 16 04:58:19.705811 containerd[1586]: time="2026-04-16T04:58:19.705536237Z" level=info msg="connecting to shim 9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" protocol=ttrpc version=3 Apr 16 04:58:19.752288 systemd[1]: Started cri-containerd-9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c.scope - libcontainer container 9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c. Apr 16 04:58:19.775560 containerd[1586]: time="2026-04-16T04:58:19.775515046Z" level=info msg="StartContainer for \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" returns successfully" Apr 16 04:58:19.785747 systemd[1]: cri-containerd-9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c.scope: Deactivated successfully. Apr 16 04:58:19.790652 containerd[1586]: time="2026-04-16T04:58:19.790543325Z" level=info msg="received container exit event container_id:\"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" id:\"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" pid:3243 exited_at:{seconds:1776315499 nanos:786391739}" Apr 16 04:58:19.830207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c-rootfs.mount: Deactivated successfully. 
Apr 16 04:58:20.127756 kubelet[2809]: E0416 04:58:20.127541 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:20.141070 containerd[1586]: time="2026-04-16T04:58:20.140837447Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 16 04:58:20.158686 containerd[1586]: time="2026-04-16T04:58:20.158509878Z" level=info msg="Container ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:20.169733 containerd[1586]: time="2026-04-16T04:58:20.169487303Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\"" Apr 16 04:58:20.172145 containerd[1586]: time="2026-04-16T04:58:20.171593107Z" level=info msg="StartContainer for \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\"" Apr 16 04:58:20.172743 containerd[1586]: time="2026-04-16T04:58:20.172694271Z" level=info msg="connecting to shim ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" protocol=ttrpc version=3 Apr 16 04:58:20.195359 systemd[1]: Started cri-containerd-ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7.scope - libcontainer container ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7. 
Apr 16 04:58:20.232148 containerd[1586]: time="2026-04-16T04:58:20.231934298Z" level=info msg="StartContainer for \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" returns successfully" Apr 16 04:58:20.244167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 04:58:20.244345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:58:20.244624 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:58:20.246802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 04:58:20.248992 systemd[1]: cri-containerd-ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7.scope: Deactivated successfully. Apr 16 04:58:20.257931 containerd[1586]: time="2026-04-16T04:58:20.257661846Z" level=info msg="received container exit event container_id:\"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" id:\"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" pid:3291 exited_at:{seconds:1776315500 nanos:249535998}" Apr 16 04:58:20.283612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 04:58:21.125392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367580680.mount: Deactivated successfully. 
Apr 16 04:58:21.148367 kubelet[2809]: E0416 04:58:21.147580 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:21.176447 containerd[1586]: time="2026-04-16T04:58:21.174886004Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 16 04:58:21.196684 containerd[1586]: time="2026-04-16T04:58:21.195476233Z" level=info msg="Container f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:21.222530 containerd[1586]: time="2026-04-16T04:58:21.222316286Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\"" Apr 16 04:58:21.225245 containerd[1586]: time="2026-04-16T04:58:21.225215478Z" level=info msg="StartContainer for \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\"" Apr 16 04:58:21.230225 containerd[1586]: time="2026-04-16T04:58:21.229031299Z" level=info msg="connecting to shim f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" protocol=ttrpc version=3 Apr 16 04:58:21.260966 systemd[1]: Started cri-containerd-f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd.scope - libcontainer container f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd. Apr 16 04:58:21.320191 systemd[1]: cri-containerd-f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd.scope: Deactivated successfully. 
Apr 16 04:58:21.324887 containerd[1586]: time="2026-04-16T04:58:21.324842060Z" level=info msg="received container exit event container_id:\"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" id:\"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" pid:3352 exited_at:{seconds:1776315501 nanos:323474770}" Apr 16 04:58:21.325251 containerd[1586]: time="2026-04-16T04:58:21.325231081Z" level=info msg="StartContainer for \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" returns successfully" Apr 16 04:58:21.557625 containerd[1586]: time="2026-04-16T04:58:21.556818909Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:58:21.557625 containerd[1586]: time="2026-04-16T04:58:21.557274838Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 16 04:58:21.558653 containerd[1586]: time="2026-04-16T04:58:21.558625800Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:58:21.564572 containerd[1586]: time="2026-04-16T04:58:21.564320118Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.917906646s" Apr 16 04:58:21.564572 containerd[1586]: time="2026-04-16T04:58:21.564476340Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 16 04:58:21.573829 containerd[1586]: time="2026-04-16T04:58:21.573642309Z" level=info msg="CreateContainer within sandbox \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 16 04:58:21.583542 containerd[1586]: time="2026-04-16T04:58:21.583355143Z" level=info msg="Container a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:21.596511 containerd[1586]: time="2026-04-16T04:58:21.596329570Z" level=info msg="CreateContainer within sandbox \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\"" Apr 16 04:58:21.598156 containerd[1586]: time="2026-04-16T04:58:21.598103310Z" level=info msg="StartContainer for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\"" Apr 16 04:58:21.599212 containerd[1586]: time="2026-04-16T04:58:21.599163886Z" level=info msg="connecting to shim a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497" address="unix:///run/containerd/s/594a3673e8f1433964b239e623ccfc7fe1551d7b5cd17abf016c377df97532a4" protocol=ttrpc version=3 Apr 16 04:58:21.624420 systemd[1]: Started cri-containerd-a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497.scope - libcontainer container a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497. 
Apr 16 04:58:21.661601 containerd[1586]: time="2026-04-16T04:58:21.661370631Z" level=info msg="StartContainer for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" returns successfully" Apr 16 04:58:22.169197 kubelet[2809]: E0416 04:58:22.168724 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:22.196597 kubelet[2809]: E0416 04:58:22.196359 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:22.218593 containerd[1586]: time="2026-04-16T04:58:22.218062634Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 16 04:58:22.255743 containerd[1586]: time="2026-04-16T04:58:22.255692191Z" level=info msg="Container 209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:22.263518 containerd[1586]: time="2026-04-16T04:58:22.263468642Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\"" Apr 16 04:58:22.270013 kubelet[2809]: I0416 04:58:22.269553 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-zxv5v" podStartSLOduration=2.18868799 podStartE2EDuration="11.269497043s" podCreationTimestamp="2026-04-16 04:58:11 +0000 UTC" firstStartedPulling="2026-04-16 04:58:12.485767036 +0000 UTC m=+6.039303359" lastFinishedPulling="2026-04-16 04:58:21.566576095 +0000 UTC m=+15.120112412" observedRunningTime="2026-04-16 
04:58:22.220446786 +0000 UTC m=+15.773983103" watchObservedRunningTime="2026-04-16 04:58:22.269497043 +0000 UTC m=+15.823033370" Apr 16 04:58:22.272171 containerd[1586]: time="2026-04-16T04:58:22.271403604Z" level=info msg="StartContainer for \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\"" Apr 16 04:58:22.273148 containerd[1586]: time="2026-04-16T04:58:22.273090455Z" level=info msg="connecting to shim 209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" protocol=ttrpc version=3 Apr 16 04:58:22.292485 systemd[1]: Started cri-containerd-209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48.scope - libcontainer container 209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48. Apr 16 04:58:22.336050 systemd[1]: cri-containerd-209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48.scope: Deactivated successfully. Apr 16 04:58:22.337643 containerd[1586]: time="2026-04-16T04:58:22.337591562Z" level=info msg="received container exit event container_id:\"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" id:\"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" pid:3432 exited_at:{seconds:1776315502 nanos:336486172}" Apr 16 04:58:22.355844 containerd[1586]: time="2026-04-16T04:58:22.355610149Z" level=info msg="StartContainer for \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" returns successfully" Apr 16 04:58:22.685352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48-rootfs.mount: Deactivated successfully. 
Apr 16 04:58:23.210779 kubelet[2809]: E0416 04:58:23.210558 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:23.210779 kubelet[2809]: E0416 04:58:23.210586 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:23.221849 containerd[1586]: time="2026-04-16T04:58:23.221788120Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 16 04:58:23.244593 containerd[1586]: time="2026-04-16T04:58:23.244339425Z" level=info msg="Container 87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:23.252721 containerd[1586]: time="2026-04-16T04:58:23.252670315Z" level=info msg="CreateContainer within sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\"" Apr 16 04:58:23.254059 containerd[1586]: time="2026-04-16T04:58:23.254032952Z" level=info msg="StartContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\"" Apr 16 04:58:23.254901 containerd[1586]: time="2026-04-16T04:58:23.254866808Z" level=info msg="connecting to shim 87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60" address="unix:///run/containerd/s/5395ae5389194ba2f93dc66774da6c39851966813dfcbd592f003a6dfa4dab44" protocol=ttrpc version=3 Apr 16 04:58:23.274319 systemd[1]: Started cri-containerd-87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60.scope - libcontainer container 87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60. 
Apr 16 04:58:23.320888 containerd[1586]: time="2026-04-16T04:58:23.320756821Z" level=info msg="StartContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" returns successfully" Apr 16 04:58:23.537211 kubelet[2809]: I0416 04:58:23.536057 2809 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 04:58:23.622650 systemd[1]: Created slice kubepods-burstable-pod543f3db5_6da8_40a6_a4f7_473fe7520011.slice - libcontainer container kubepods-burstable-pod543f3db5_6da8_40a6_a4f7_473fe7520011.slice. Apr 16 04:58:23.645400 systemd[1]: Created slice kubepods-burstable-poda72df67a_2614_405b_b41e_c848003ef6da.slice - libcontainer container kubepods-burstable-poda72df67a_2614_405b_b41e_c848003ef6da.slice. Apr 16 04:58:23.740656 kubelet[2809]: I0416 04:58:23.740234 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zbr\" (UniqueName: \"kubernetes.io/projected/543f3db5-6da8-40a6-a4f7-473fe7520011-kube-api-access-n8zbr\") pod \"coredns-66bc5c9577-fp2qx\" (UID: \"543f3db5-6da8-40a6-a4f7-473fe7520011\") " pod="kube-system/coredns-66bc5c9577-fp2qx" Apr 16 04:58:23.740656 kubelet[2809]: I0416 04:58:23.740408 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgw9n\" (UniqueName: \"kubernetes.io/projected/a72df67a-2614-405b-b41e-c848003ef6da-kube-api-access-kgw9n\") pod \"coredns-66bc5c9577-rsxbd\" (UID: \"a72df67a-2614-405b-b41e-c848003ef6da\") " pod="kube-system/coredns-66bc5c9577-rsxbd" Apr 16 04:58:23.740656 kubelet[2809]: I0416 04:58:23.740443 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a72df67a-2614-405b-b41e-c848003ef6da-config-volume\") pod \"coredns-66bc5c9577-rsxbd\" (UID: \"a72df67a-2614-405b-b41e-c848003ef6da\") " pod="kube-system/coredns-66bc5c9577-rsxbd" Apr 16 
04:58:23.740656 kubelet[2809]: I0416 04:58:23.740477 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543f3db5-6da8-40a6-a4f7-473fe7520011-config-volume\") pod \"coredns-66bc5c9577-fp2qx\" (UID: \"543f3db5-6da8-40a6-a4f7-473fe7520011\") " pod="kube-system/coredns-66bc5c9577-fp2qx" Apr 16 04:58:23.953203 kubelet[2809]: E0416 04:58:23.952924 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:23.954313 kubelet[2809]: E0416 04:58:23.954255 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:23.955846 containerd[1586]: time="2026-04-16T04:58:23.955756669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rsxbd,Uid:a72df67a-2614-405b-b41e-c848003ef6da,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:23.956062 containerd[1586]: time="2026-04-16T04:58:23.955794967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fp2qx,Uid:543f3db5-6da8-40a6-a4f7-473fe7520011,Namespace:kube-system,Attempt:0,}" Apr 16 04:58:24.229625 kubelet[2809]: E0416 04:58:24.229151 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:25.237854 kubelet[2809]: E0416 04:58:25.237598 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:25.508242 systemd-networkd[1530]: cilium_host: Link UP Apr 16 04:58:25.508599 systemd-networkd[1530]: cilium_net: Link UP Apr 16 04:58:25.509622 systemd-networkd[1530]: 
cilium_net: Gained carrier Apr 16 04:58:25.509889 systemd-networkd[1530]: cilium_host: Gained carrier Apr 16 04:58:25.608305 systemd-networkd[1530]: cilium_vxlan: Link UP Apr 16 04:58:25.608312 systemd-networkd[1530]: cilium_vxlan: Gained carrier Apr 16 04:58:25.647884 systemd-networkd[1530]: cilium_host: Gained IPv6LL Apr 16 04:58:25.829158 kernel: NET: Registered PF_ALG protocol family Apr 16 04:58:25.855553 systemd-networkd[1530]: cilium_net: Gained IPv6LL Apr 16 04:58:26.255941 kubelet[2809]: E0416 04:58:26.255537 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:26.465837 systemd-networkd[1530]: lxc_health: Link UP Apr 16 04:58:26.467225 systemd-networkd[1530]: lxc_health: Gained carrier Apr 16 04:58:27.013512 systemd-networkd[1530]: lxcff65129e256d: Link UP Apr 16 04:58:27.026156 kernel: eth0: renamed from tmpba325 Apr 16 04:58:27.032537 systemd-networkd[1530]: lxc36cdb5499053: Link UP Apr 16 04:58:27.034750 systemd-networkd[1530]: lxcff65129e256d: Gained carrier Apr 16 04:58:27.035296 kernel: eth0: renamed from tmp63118 Apr 16 04:58:27.035954 systemd-networkd[1530]: lxc36cdb5499053: Gained carrier Apr 16 04:58:27.519763 systemd-networkd[1530]: cilium_vxlan: Gained IPv6LL Apr 16 04:58:28.005365 kubelet[2809]: E0416 04:58:28.003947 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:28.015155 kubelet[2809]: I0416 04:58:28.015065 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wwfpc" podStartSLOduration=9.541909635 podStartE2EDuration="17.015050331s" podCreationTimestamp="2026-04-16 04:58:11 +0000 UTC" firstStartedPulling="2026-04-16 04:58:12.173091555 +0000 UTC m=+5.726627872" lastFinishedPulling="2026-04-16 04:58:19.646232248 +0000 UTC 
m=+13.199768568" observedRunningTime="2026-04-16 04:58:24.248254663 +0000 UTC m=+17.801790990" watchObservedRunningTime="2026-04-16 04:58:28.015050331 +0000 UTC m=+21.568586658" Apr 16 04:58:28.160478 systemd-networkd[1530]: lxc_health: Gained IPv6LL Apr 16 04:58:28.863893 systemd-networkd[1530]: lxcff65129e256d: Gained IPv6LL Apr 16 04:58:28.864413 systemd-networkd[1530]: lxc36cdb5499053: Gained IPv6LL Apr 16 04:58:30.772541 containerd[1586]: time="2026-04-16T04:58:30.772377925Z" level=info msg="connecting to shim 6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe" address="unix:///run/containerd/s/2b7ada7cb2a62df236125c2f4df55f82226ec6584771245a06aefeee7946097c" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:30.772541 containerd[1586]: time="2026-04-16T04:58:30.772529935Z" level=info msg="connecting to shim ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11" address="unix:///run/containerd/s/475c47d667e54964f667c3f9cc0d100844c01d72b382fc83760734f6cb82ad2a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:58:30.801815 systemd[1]: Started cri-containerd-6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe.scope - libcontainer container 6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe. Apr 16 04:58:30.806216 systemd[1]: Started cri-containerd-ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11.scope - libcontainer container ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11. 
Apr 16 04:58:30.817357 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:58:30.826473 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:58:30.869238 containerd[1586]: time="2026-04-16T04:58:30.868650825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rsxbd,Uid:a72df67a-2614-405b-b41e-c848003ef6da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe\"" Apr 16 04:58:30.869682 kubelet[2809]: E0416 04:58:30.869623 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:30.900051 containerd[1586]: time="2026-04-16T04:58:30.899815827Z" level=info msg="CreateContainer within sandbox \"6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 04:58:30.901350 containerd[1586]: time="2026-04-16T04:58:30.901303550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fp2qx,Uid:543f3db5-6da8-40a6-a4f7-473fe7520011,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11\"" Apr 16 04:58:30.904929 kubelet[2809]: E0416 04:58:30.904906 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:30.923996 containerd[1586]: time="2026-04-16T04:58:30.923701007Z" level=info msg="CreateContainer within sandbox \"ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 04:58:30.953671 containerd[1586]: 
time="2026-04-16T04:58:30.951794958Z" level=info msg="Container 7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:30.976461 containerd[1586]: time="2026-04-16T04:58:30.975479670Z" level=info msg="Container 014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:58:30.980144 containerd[1586]: time="2026-04-16T04:58:30.979991096Z" level=info msg="CreateContainer within sandbox \"6311877f323cef6b2ef9be59900f53f01dcd2b88bee9db2defcf7c121ec686fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a\"" Apr 16 04:58:30.984967 containerd[1586]: time="2026-04-16T04:58:30.984830464Z" level=info msg="StartContainer for \"7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a\"" Apr 16 04:58:30.991736 containerd[1586]: time="2026-04-16T04:58:30.990803907Z" level=info msg="connecting to shim 7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a" address="unix:///run/containerd/s/2b7ada7cb2a62df236125c2f4df55f82226ec6584771245a06aefeee7946097c" protocol=ttrpc version=3 Apr 16 04:58:30.998659 containerd[1586]: time="2026-04-16T04:58:30.998503719Z" level=info msg="CreateContainer within sandbox \"ba3252e7775a8b95adec9c71aecc8139c05e9478b664f8f1b42a43e533777e11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5\"" Apr 16 04:58:31.007358 containerd[1586]: time="2026-04-16T04:58:31.004971513Z" level=info msg="StartContainer for \"014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5\"" Apr 16 04:58:31.015287 containerd[1586]: time="2026-04-16T04:58:31.015169949Z" level=info msg="connecting to shim 014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5" 
address="unix:///run/containerd/s/475c47d667e54964f667c3f9cc0d100844c01d72b382fc83760734f6cb82ad2a" protocol=ttrpc version=3 Apr 16 04:58:31.029326 systemd[1]: Started cri-containerd-7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a.scope - libcontainer container 7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a. Apr 16 04:58:31.034815 systemd[1]: Started cri-containerd-014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5.scope - libcontainer container 014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5. Apr 16 04:58:31.097051 containerd[1586]: time="2026-04-16T04:58:31.096907388Z" level=info msg="StartContainer for \"7c30432abac819642f5e869db80e7208f1402b38659323fcd820a573bd571f2a\" returns successfully" Apr 16 04:58:31.140445 containerd[1586]: time="2026-04-16T04:58:31.140365901Z" level=info msg="StartContainer for \"014c4b57141b4157c5ac66955cce84a94296cd7cbe4f3a2354b54fe4b48413e5\" returns successfully" Apr 16 04:58:31.286700 kubelet[2809]: E0416 04:58:31.285370 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:31.288574 kubelet[2809]: E0416 04:58:31.287461 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:31.309588 kubelet[2809]: I0416 04:58:31.309265 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fp2qx" podStartSLOduration=20.309164444 podStartE2EDuration="20.309164444s" podCreationTimestamp="2026-04-16 04:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:31.306552836 +0000 UTC m=+24.860089164" watchObservedRunningTime="2026-04-16 04:58:31.309164444 +0000 UTC 
m=+24.862700762" Apr 16 04:58:32.300326 kubelet[2809]: E0416 04:58:32.300158 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:32.300326 kubelet[2809]: E0416 04:58:32.300200 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:33.041386 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:54088.service - OpenSSH per-connection server daemon (10.0.0.1:54088). Apr 16 04:58:33.123366 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 54088 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:33.124859 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:33.144781 systemd-logind[1544]: New session 10 of user core. Apr 16 04:58:33.166816 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 04:58:33.280825 sshd[4156]: Connection closed by 10.0.0.1 port 54088 Apr 16 04:58:33.281844 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:33.286768 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:54088.service: Deactivated successfully. Apr 16 04:58:33.288518 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 04:58:33.289163 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit. Apr 16 04:58:33.290166 systemd-logind[1544]: Removed session 10. 
Apr 16 04:58:33.306517 kubelet[2809]: E0416 04:58:33.306044 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:33.306517 kubelet[2809]: E0416 04:58:33.306193 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:38.298249 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:49742.service - OpenSSH per-connection server daemon (10.0.0.1:49742). Apr 16 04:58:38.383694 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 49742 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:38.386585 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:38.392750 systemd-logind[1544]: New session 11 of user core. Apr 16 04:58:38.405640 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 04:58:38.524466 sshd[4173]: Connection closed by 10.0.0.1 port 49742 Apr 16 04:58:38.524629 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:38.530747 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:49742.service: Deactivated successfully. Apr 16 04:58:38.533005 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 04:58:38.533700 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit. Apr 16 04:58:38.534663 systemd-logind[1544]: Removed session 11. 
Apr 16 04:58:38.943345 kubelet[2809]: I0416 04:58:38.943183 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 04:58:38.944416 kubelet[2809]: E0416 04:58:38.944396 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:39.023431 kubelet[2809]: I0416 04:58:39.023073 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rsxbd" podStartSLOduration=28.023059035 podStartE2EDuration="28.023059035s" podCreationTimestamp="2026-04-16 04:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:58:31.375190215 +0000 UTC m=+24.928726550" watchObservedRunningTime="2026-04-16 04:58:39.023059035 +0000 UTC m=+32.576595361" Apr 16 04:58:39.407047 kubelet[2809]: E0416 04:58:39.406820 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:58:43.539829 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:49754.service - OpenSSH per-connection server daemon (10.0.0.1:49754). Apr 16 04:58:43.584874 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 49754 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:43.586226 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:43.591565 systemd-logind[1544]: New session 12 of user core. Apr 16 04:58:43.601282 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 16 04:58:43.663077 sshd[4193]: Connection closed by 10.0.0.1 port 49754 Apr 16 04:58:43.663344 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:43.670664 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:49754.service: Deactivated successfully. Apr 16 04:58:43.671882 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 04:58:43.672516 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit. Apr 16 04:58:43.677924 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:49756.service - OpenSSH per-connection server daemon (10.0.0.1:49756). Apr 16 04:58:43.679850 systemd-logind[1544]: Removed session 12. Apr 16 04:58:43.722088 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 49756 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:43.723031 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:43.727044 systemd-logind[1544]: New session 13 of user core. Apr 16 04:58:43.734267 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 04:58:43.828197 sshd[4211]: Connection closed by 10.0.0.1 port 49756 Apr 16 04:58:43.829192 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:43.838028 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:49756.service: Deactivated successfully. Apr 16 04:58:43.840477 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 04:58:43.841168 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit. Apr 16 04:58:43.845382 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:49768.service - OpenSSH per-connection server daemon (10.0.0.1:49768). Apr 16 04:58:43.848620 systemd-logind[1544]: Removed session 13. 
Apr 16 04:58:43.894698 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 49768 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:43.895712 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:43.899419 systemd-logind[1544]: New session 14 of user core. Apr 16 04:58:43.908272 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 04:58:43.963687 sshd[4225]: Connection closed by 10.0.0.1 port 49768 Apr 16 04:58:43.963963 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:43.966607 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:49768.service: Deactivated successfully. Apr 16 04:58:43.967832 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 04:58:43.968598 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit. Apr 16 04:58:43.969408 systemd-logind[1544]: Removed session 14. Apr 16 04:58:48.980211 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:56076.service - OpenSSH per-connection server daemon (10.0.0.1:56076). Apr 16 04:58:49.021586 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:49.022477 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:49.025903 systemd-logind[1544]: New session 15 of user core. Apr 16 04:58:49.042296 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 04:58:49.104262 sshd[4242]: Connection closed by 10.0.0.1 port 56076 Apr 16 04:58:49.104609 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:49.107439 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:56076.service: Deactivated successfully. Apr 16 04:58:49.108921 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 04:58:49.109971 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit. 
Apr 16 04:58:49.110965 systemd-logind[1544]: Removed session 15. Apr 16 04:58:54.118847 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:56084.service - OpenSSH per-connection server daemon (10.0.0.1:56084). Apr 16 04:58:54.168488 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 56084 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:54.169394 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:54.173939 systemd-logind[1544]: New session 16 of user core. Apr 16 04:58:54.179535 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 04:58:54.251635 sshd[4258]: Connection closed by 10.0.0.1 port 56084 Apr 16 04:58:54.253429 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:54.266509 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:56084.service: Deactivated successfully. Apr 16 04:58:54.268098 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 04:58:54.268930 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit. Apr 16 04:58:54.271586 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:56086.service - OpenSSH per-connection server daemon (10.0.0.1:56086). Apr 16 04:58:54.271990 systemd-logind[1544]: Removed session 16. Apr 16 04:58:54.311463 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 56086 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:54.312583 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:54.316005 systemd-logind[1544]: New session 17 of user core. Apr 16 04:58:54.322287 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 16 04:58:54.457515 sshd[4274]: Connection closed by 10.0.0.1 port 56086 Apr 16 04:58:54.457729 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:54.466704 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:56086.service: Deactivated successfully. Apr 16 04:58:54.467886 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 04:58:54.468623 systemd-logind[1544]: Session 17 logged out. Waiting for processes to exit. Apr 16 04:58:54.471159 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:56100.service - OpenSSH per-connection server daemon (10.0.0.1:56100). Apr 16 04:58:54.471600 systemd-logind[1544]: Removed session 17. Apr 16 04:58:54.524367 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 56100 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:54.525293 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:54.528953 systemd-logind[1544]: New session 18 of user core. Apr 16 04:58:54.537340 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 04:58:54.876538 sshd[4288]: Connection closed by 10.0.0.1 port 56100 Apr 16 04:58:54.877079 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:54.895319 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:56100.service: Deactivated successfully. Apr 16 04:58:54.897565 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 04:58:54.898258 systemd-logind[1544]: Session 18 logged out. Waiting for processes to exit. Apr 16 04:58:54.900448 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:56106.service - OpenSSH per-connection server daemon (10.0.0.1:56106). Apr 16 04:58:54.901651 systemd-logind[1544]: Removed session 18. 
Apr 16 04:58:54.942011 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 56106 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:54.942943 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:54.946523 systemd-logind[1544]: New session 19 of user core. Apr 16 04:58:54.955285 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 04:58:55.094475 sshd[4308]: Connection closed by 10.0.0.1 port 56106 Apr 16 04:58:55.095554 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:55.105459 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:56106.service: Deactivated successfully. Apr 16 04:58:55.106973 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 04:58:55.107645 systemd-logind[1544]: Session 19 logged out. Waiting for processes to exit. Apr 16 04:58:55.113077 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Apr 16 04:58:55.114356 systemd-logind[1544]: Removed session 19. Apr 16 04:58:55.156371 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:58:55.157371 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:58:55.161005 systemd-logind[1544]: New session 20 of user core. Apr 16 04:58:55.170277 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 04:58:55.232529 sshd[4322]: Connection closed by 10.0.0.1 port 56118 Apr 16 04:58:55.232964 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Apr 16 04:58:55.235768 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:56118.service: Deactivated successfully. Apr 16 04:58:55.237060 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 04:58:55.237735 systemd-logind[1544]: Session 20 logged out. Waiting for processes to exit. 
Apr 16 04:58:55.238529 systemd-logind[1544]: Removed session 20. Apr 16 04:59:00.252335 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Apr 16 04:59:00.301619 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:00.302405 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:00.306971 systemd-logind[1544]: New session 21 of user core. Apr 16 04:59:00.317322 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 04:59:00.384343 sshd[4343]: Connection closed by 10.0.0.1 port 41158 Apr 16 04:59:00.384840 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Apr 16 04:59:00.389635 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:41158.service: Deactivated successfully. Apr 16 04:59:00.391242 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 04:59:00.391951 systemd-logind[1544]: Session 21 logged out. Waiting for processes to exit. Apr 16 04:59:00.392868 systemd-logind[1544]: Removed session 21. Apr 16 04:59:05.407184 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:54192.service - OpenSSH per-connection server daemon (10.0.0.1:54192). Apr 16 04:59:05.460269 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 54192 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:05.461344 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:05.465959 systemd-logind[1544]: New session 22 of user core. Apr 16 04:59:05.476189 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 16 04:59:05.547303 sshd[4361]: Connection closed by 10.0.0.1 port 54192 Apr 16 04:59:05.549324 sshd-session[4358]: pam_unix(sshd:session): session closed for user core Apr 16 04:59:05.565331 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:54192.service: Deactivated successfully. Apr 16 04:59:05.566888 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 04:59:05.567514 systemd-logind[1544]: Session 22 logged out. Waiting for processes to exit. Apr 16 04:59:05.569565 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:54194.service - OpenSSH per-connection server daemon (10.0.0.1:54194). Apr 16 04:59:05.569998 systemd-logind[1544]: Removed session 22. Apr 16 04:59:05.642247 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 54194 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:05.643485 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:05.648591 systemd-logind[1544]: New session 23 of user core. Apr 16 04:59:05.662311 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 04:59:06.945766 containerd[1586]: time="2026-04-16T04:59:06.945000056Z" level=info msg="StopContainer for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" with timeout 30 (s)" Apr 16 04:59:06.960875 containerd[1586]: time="2026-04-16T04:59:06.960826660Z" level=info msg="Stop container \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" with signal terminated" Apr 16 04:59:06.979540 systemd[1]: cri-containerd-a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497.scope: Deactivated successfully. 
Apr 16 04:59:06.985667 containerd[1586]: time="2026-04-16T04:59:06.985538889Z" level=info msg="received container exit event container_id:\"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" id:\"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" pid:3397 exited_at:{seconds:1776315546 nanos:984569146}" Apr 16 04:59:06.999999 containerd[1586]: time="2026-04-16T04:59:06.999933524Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 04:59:07.002358 containerd[1586]: time="2026-04-16T04:59:07.002337430Z" level=info msg="StopContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" with timeout 2 (s)" Apr 16 04:59:07.002944 containerd[1586]: time="2026-04-16T04:59:07.002927954Z" level=info msg="Stop container \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" with signal terminated" Apr 16 04:59:07.012238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497-rootfs.mount: Deactivated successfully. Apr 16 04:59:07.017612 systemd-networkd[1530]: lxc_health: Link DOWN Apr 16 04:59:07.018473 systemd-networkd[1530]: lxc_health: Lost carrier Apr 16 04:59:07.040228 containerd[1586]: time="2026-04-16T04:59:07.040191004Z" level=info msg="StopContainer for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" returns successfully" Apr 16 04:59:07.040942 systemd[1]: cri-containerd-87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60.scope: Deactivated successfully. Apr 16 04:59:07.041211 systemd[1]: cri-containerd-87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60.scope: Consumed 7.242s CPU time, 122.1M memory peak, 400K read from disk, 13.3M written to disk. 
Apr 16 04:59:07.052157 containerd[1586]: time="2026-04-16T04:59:07.051896940Z" level=info msg="received container exit event container_id:\"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" id:\"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" pid:3469 exited_at:{seconds:1776315547 nanos:43471101}" Apr 16 04:59:07.056274 containerd[1586]: time="2026-04-16T04:59:07.054086586Z" level=info msg="StopPodSandbox for \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\"" Apr 16 04:59:07.063345 containerd[1586]: time="2026-04-16T04:59:07.063298589Z" level=info msg="Container to stop \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.071900 systemd[1]: cri-containerd-43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa.scope: Deactivated successfully. Apr 16 04:59:07.074587 containerd[1586]: time="2026-04-16T04:59:07.074549743Z" level=info msg="received sandbox exit event container_id:\"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" id:\"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" exit_status:137 exited_at:{seconds:1776315547 nanos:72593237}" monitor_name=podsandbox Apr 16 04:59:07.082165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60-rootfs.mount: Deactivated successfully. 
Apr 16 04:59:07.095742 containerd[1586]: time="2026-04-16T04:59:07.095688020Z" level=info msg="StopContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" returns successfully" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096552855Z" level=info msg="StopPodSandbox for \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\"" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096603727Z" level=info msg="Container to stop \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096611720Z" level=info msg="Container to stop \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096617579Z" level=info msg="Container to stop \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096623169Z" level=info msg="Container to stop \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.096764 containerd[1586]: time="2026-04-16T04:59:07.096629083Z" level=info msg="Container to stop \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 16 04:59:07.097975 kubelet[2809]: E0416 04:59:07.097898 2809 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:59:07.106095 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa-rootfs.mount: Deactivated successfully. Apr 16 04:59:07.111557 systemd[1]: cri-containerd-b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd.scope: Deactivated successfully. Apr 16 04:59:07.112778 containerd[1586]: time="2026-04-16T04:59:07.112208128Z" level=info msg="received sandbox exit event container_id:\"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" id:\"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" exit_status:137 exited_at:{seconds:1776315547 nanos:108825446}" monitor_name=podsandbox Apr 16 04:59:07.116497 containerd[1586]: time="2026-04-16T04:59:07.115925673Z" level=info msg="shim disconnected" id=43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa namespace=k8s.io Apr 16 04:59:07.116497 containerd[1586]: time="2026-04-16T04:59:07.115949036Z" level=warning msg="cleaning up after shim disconnected" id=43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa namespace=k8s.io Apr 16 04:59:07.123362 containerd[1586]: time="2026-04-16T04:59:07.115955052Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:59:07.137627 containerd[1586]: time="2026-04-16T04:59:07.137585423Z" level=info msg="TearDown network for sandbox \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" successfully" Apr 16 04:59:07.137627 containerd[1586]: time="2026-04-16T04:59:07.137619906Z" level=info msg="StopPodSandbox for \"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" returns successfully" Apr 16 04:59:07.137730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa-shm.mount: Deactivated successfully. 
Apr 16 04:59:07.142555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd-rootfs.mount: Deactivated successfully. Apr 16 04:59:07.152629 containerd[1586]: time="2026-04-16T04:59:07.152544374Z" level=info msg="shim disconnected" id=b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd namespace=k8s.io Apr 16 04:59:07.152629 containerd[1586]: time="2026-04-16T04:59:07.152604581Z" level=warning msg="cleaning up after shim disconnected" id=b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd namespace=k8s.io Apr 16 04:59:07.153283 containerd[1586]: time="2026-04-16T04:59:07.152610552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 04:59:07.154685 containerd[1586]: time="2026-04-16T04:59:07.154557974Z" level=info msg="received sandbox container exit event sandbox_id:\"43c9a450ce4de4f07f6d81e7e598b6c88351cdbfe3d39272ec184000ba558aaa\" exit_status:137 exited_at:{seconds:1776315547 nanos:72593237}" monitor_name=criService Apr 16 04:59:07.169165 containerd[1586]: time="2026-04-16T04:59:07.169003445Z" level=info msg="received sandbox container exit event sandbox_id:\"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" exit_status:137 exited_at:{seconds:1776315547 nanos:108825446}" monitor_name=criService Apr 16 04:59:07.169536 containerd[1586]: time="2026-04-16T04:59:07.169417047Z" level=info msg="TearDown network for sandbox \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" successfully" Apr 16 04:59:07.169536 containerd[1586]: time="2026-04-16T04:59:07.169438080Z" level=info msg="StopPodSandbox for \"b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd\" returns successfully" Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335028 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-xtables-lock\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335064 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzvsm\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-kube-api-access-qzvsm\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335077 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-kernel\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335088 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-etc-cni-netd\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335099 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cni-path\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335282 kubelet[2809]: I0416 04:59:07.335136 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lc7w\" (UniqueName: \"kubernetes.io/projected/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-kube-api-access-5lc7w\") pod \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\" (UID: \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\") " Apr 16 04:59:07.335571 kubelet[2809]: I0416 
04:59:07.335149 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-net\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335571 kubelet[2809]: I0416 04:59:07.335161 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335571 kubelet[2809]: I0416 04:59:07.335196 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335571 kubelet[2809]: I0416 04:59:07.335210 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335571 kubelet[2809]: I0416 04:59:07.335173 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335222 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335234 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-bpf-maps\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335245 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335256 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-config-path\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335267 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hubble-tls\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335690 kubelet[2809]: I0416 04:59:07.335276 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hostproc\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335290 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-run\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335302 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-cgroup\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335314 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-clustermesh-secrets\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335326 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-cilium-config-path\") pod \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\" (UID: \"c87fdd58-6e76-4e5f-a092-b9a7291a0eef\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335336 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-lib-modules\") pod \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\" (UID: \"bde4ab81-4b19-4648-9e7a-bfa712df5b4d\") " Apr 16 04:59:07.335818 kubelet[2809]: I0416 04:59:07.335362 2809 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: I0416 04:59:07.335368 2809 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: I0416 04:59:07.335374 2809 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: I0416 04:59:07.335381 2809 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: 
I0416 04:59:07.335386 2809 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: I0416 04:59:07.335393 2809 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.335937 kubelet[2809]: I0416 04:59:07.335409 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.337223 kubelet[2809]: I0416 04:59:07.337190 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 04:59:07.337395 kubelet[2809]: I0416 04:59:07.337376 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.337619 kubelet[2809]: I0416 04:59:07.337444 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.337619 kubelet[2809]: I0416 04:59:07.337457 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 16 04:59:07.338618 kubelet[2809]: I0416 04:59:07.338567 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-kube-api-access-5lc7w" (OuterVolumeSpecName: "kube-api-access-5lc7w") pod "c87fdd58-6e76-4e5f-a092-b9a7291a0eef" (UID: "c87fdd58-6e76-4e5f-a092-b9a7291a0eef"). InnerVolumeSpecName "kube-api-access-5lc7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 04:59:07.339531 kubelet[2809]: I0416 04:59:07.339496 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-kube-api-access-qzvsm" (OuterVolumeSpecName: "kube-api-access-qzvsm") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "kube-api-access-qzvsm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 04:59:07.339593 kubelet[2809]: I0416 04:59:07.339571 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 04:59:07.339651 kubelet[2809]: I0416 04:59:07.339636 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bde4ab81-4b19-4648-9e7a-bfa712df5b4d" (UID: "bde4ab81-4b19-4648-9e7a-bfa712df5b4d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 04:59:07.340006 kubelet[2809]: I0416 04:59:07.339969 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c87fdd58-6e76-4e5f-a092-b9a7291a0eef" (UID: "c87fdd58-6e76-4e5f-a092-b9a7291a0eef"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 04:59:07.436478 kubelet[2809]: I0416 04:59:07.436302 2809 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzvsm\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-kube-api-access-qzvsm\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437164 2809 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lc7w\" (UniqueName: \"kubernetes.io/projected/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-kube-api-access-5lc7w\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437176 2809 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437198 2809 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437204 2809 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437238 2809 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437243 2809 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: 
I0416 04:59:07.437249 2809 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437452 kubelet[2809]: I0416 04:59:07.437255 2809 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c87fdd58-6e76-4e5f-a092-b9a7291a0eef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.437594 kubelet[2809]: I0416 04:59:07.437261 2809 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bde4ab81-4b19-4648-9e7a-bfa712df5b4d-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 16 04:59:07.610061 kubelet[2809]: I0416 04:59:07.610016 2809 scope.go:117] "RemoveContainer" containerID="87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60" Apr 16 04:59:07.612758 containerd[1586]: time="2026-04-16T04:59:07.612623772Z" level=info msg="RemoveContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\"" Apr 16 04:59:07.614153 systemd[1]: Removed slice kubepods-burstable-podbde4ab81_4b19_4648_9e7a_bfa712df5b4d.slice - libcontainer container kubepods-burstable-podbde4ab81_4b19_4648_9e7a_bfa712df5b4d.slice. Apr 16 04:59:07.614253 systemd[1]: kubepods-burstable-podbde4ab81_4b19_4648_9e7a_bfa712df5b4d.slice: Consumed 7.343s CPU time, 122.5M memory peak, 408K read from disk, 13.3M written to disk. Apr 16 04:59:07.617538 systemd[1]: Removed slice kubepods-besteffort-podc87fdd58_6e76_4e5f_a092_b9a7291a0eef.slice - libcontainer container kubepods-besteffort-podc87fdd58_6e76_4e5f_a092_b9a7291a0eef.slice. 
Apr 16 04:59:07.619062 containerd[1586]: time="2026-04-16T04:59:07.619013400Z" level=info msg="RemoveContainer for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" returns successfully" Apr 16 04:59:07.619382 kubelet[2809]: I0416 04:59:07.619340 2809 scope.go:117] "RemoveContainer" containerID="209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48" Apr 16 04:59:07.620611 containerd[1586]: time="2026-04-16T04:59:07.620586376Z" level=info msg="RemoveContainer for \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\"" Apr 16 04:59:07.633315 containerd[1586]: time="2026-04-16T04:59:07.633232647Z" level=info msg="RemoveContainer for \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" returns successfully" Apr 16 04:59:07.633744 kubelet[2809]: I0416 04:59:07.633686 2809 scope.go:117] "RemoveContainer" containerID="f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd" Apr 16 04:59:07.635924 containerd[1586]: time="2026-04-16T04:59:07.635857578Z" level=info msg="RemoveContainer for \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\"" Apr 16 04:59:07.639568 containerd[1586]: time="2026-04-16T04:59:07.639486655Z" level=info msg="RemoveContainer for \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" returns successfully" Apr 16 04:59:07.639688 kubelet[2809]: I0416 04:59:07.639656 2809 scope.go:117] "RemoveContainer" containerID="ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7" Apr 16 04:59:07.645036 containerd[1586]: time="2026-04-16T04:59:07.645009106Z" level=info msg="RemoveContainer for \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\"" Apr 16 04:59:07.648151 containerd[1586]: time="2026-04-16T04:59:07.648094733Z" level=info msg="RemoveContainer for \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" returns successfully" Apr 16 04:59:07.648381 kubelet[2809]: I0416 04:59:07.648311 2809 scope.go:117] 
"RemoveContainer" containerID="9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c" Apr 16 04:59:07.649398 containerd[1586]: time="2026-04-16T04:59:07.649362566Z" level=info msg="RemoveContainer for \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\"" Apr 16 04:59:07.654748 containerd[1586]: time="2026-04-16T04:59:07.654724025Z" level=info msg="RemoveContainer for \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" returns successfully" Apr 16 04:59:07.654905 kubelet[2809]: I0416 04:59:07.654882 2809 scope.go:117] "RemoveContainer" containerID="87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60" Apr 16 04:59:07.655078 containerd[1586]: time="2026-04-16T04:59:07.655032249Z" level=error msg="ContainerStatus for \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\": not found" Apr 16 04:59:07.655263 kubelet[2809]: E0416 04:59:07.655239 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\": not found" containerID="87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60" Apr 16 04:59:07.655337 kubelet[2809]: I0416 04:59:07.655268 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60"} err="failed to get container status \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\": rpc error: code = NotFound desc = an error occurred when try to find container \"87d8a54d37e37b7ca6d757e75ea72c5a1de8e9cdd857a0f39b655815b065bc60\": not found" Apr 16 04:59:07.655337 kubelet[2809]: I0416 04:59:07.655293 2809 scope.go:117] "RemoveContainer" 
containerID="209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48" Apr 16 04:59:07.655452 containerd[1586]: time="2026-04-16T04:59:07.655425839Z" level=error msg="ContainerStatus for \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\": not found" Apr 16 04:59:07.655534 kubelet[2809]: E0416 04:59:07.655517 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\": not found" containerID="209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48" Apr 16 04:59:07.655560 kubelet[2809]: I0416 04:59:07.655540 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48"} err="failed to get container status \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\": rpc error: code = NotFound desc = an error occurred when try to find container \"209d5669f241ab49b5df9c56855f98a03f5c5086fa46af7f385a39905f486d48\": not found" Apr 16 04:59:07.655560 kubelet[2809]: I0416 04:59:07.655552 2809 scope.go:117] "RemoveContainer" containerID="f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd" Apr 16 04:59:07.655760 containerd[1586]: time="2026-04-16T04:59:07.655709585Z" level=error msg="ContainerStatus for \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\": not found" Apr 16 04:59:07.655822 kubelet[2809]: E0416 04:59:07.655802 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\": not found" containerID="f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd" Apr 16 04:59:07.655843 kubelet[2809]: I0416 04:59:07.655823 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd"} err="failed to get container status \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"f02f2899dea2f5fa6b3db3e87f137005d67b1204e6d959bc80bcda05d62518cd\": not found" Apr 16 04:59:07.655843 kubelet[2809]: I0416 04:59:07.655834 2809 scope.go:117] "RemoveContainer" containerID="ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7" Apr 16 04:59:07.655979 containerd[1586]: time="2026-04-16T04:59:07.655955435Z" level=error msg="ContainerStatus for \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\": not found" Apr 16 04:59:07.656129 kubelet[2809]: E0416 04:59:07.656060 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\": not found" containerID="ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7" Apr 16 04:59:07.656129 kubelet[2809]: I0416 04:59:07.656076 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7"} err="failed to get container status \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"ea39fb6b78c620fcdf27433f8c20d64606f5f6f58e0c15358294973624a979b7\": not found" Apr 16 04:59:07.656129 kubelet[2809]: I0416 04:59:07.656087 2809 scope.go:117] "RemoveContainer" containerID="9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c" Apr 16 04:59:07.656349 containerd[1586]: time="2026-04-16T04:59:07.656324643Z" level=error msg="ContainerStatus for \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\": not found" Apr 16 04:59:07.656436 kubelet[2809]: E0416 04:59:07.656421 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\": not found" containerID="9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c" Apr 16 04:59:07.656512 kubelet[2809]: I0416 04:59:07.656437 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c"} err="failed to get container status \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f61e0da78b72afbe814bec4a390ed7907e502089c67fcd9c501c3c50a86727c\": not found" Apr 16 04:59:07.656512 kubelet[2809]: I0416 04:59:07.656446 2809 scope.go:117] "RemoveContainer" containerID="a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497" Apr 16 04:59:07.657491 containerd[1586]: time="2026-04-16T04:59:07.657450170Z" level=info msg="RemoveContainer for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\"" Apr 16 04:59:07.659810 containerd[1586]: time="2026-04-16T04:59:07.659779822Z" level=info msg="RemoveContainer for 
\"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" returns successfully" Apr 16 04:59:07.659936 kubelet[2809]: I0416 04:59:07.659919 2809 scope.go:117] "RemoveContainer" containerID="a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497" Apr 16 04:59:07.660247 containerd[1586]: time="2026-04-16T04:59:07.660226463Z" level=error msg="ContainerStatus for \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\": not found" Apr 16 04:59:07.660313 kubelet[2809]: E0416 04:59:07.660296 2809 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\": not found" containerID="a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497" Apr 16 04:59:07.660334 kubelet[2809]: I0416 04:59:07.660317 2809 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497"} err="failed to get container status \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8a9674b0fe9b08a0b7d5d080f640a0f48c471c27920a208f133e3b8c19f5497\": not found" Apr 16 04:59:08.013041 systemd[1]: var-lib-kubelet-pods-c87fdd58\x2d6e76\x2d4e5f\x2da092\x2db9a7291a0eef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5lc7w.mount: Deactivated successfully. Apr 16 04:59:08.013223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8b51683827526a4ed40a885665b731d635348982c40b14a1d0b731d317b8ecd-shm.mount: Deactivated successfully. 
Apr 16 04:59:08.013332 systemd[1]: var-lib-kubelet-pods-bde4ab81\x2d4b19\x2d4648\x2d9e7a\x2dbfa712df5b4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqzvsm.mount: Deactivated successfully. Apr 16 04:59:08.013381 systemd[1]: var-lib-kubelet-pods-bde4ab81\x2d4b19\x2d4648\x2d9e7a\x2dbfa712df5b4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 16 04:59:08.013426 systemd[1]: var-lib-kubelet-pods-bde4ab81\x2d4b19\x2d4648\x2d9e7a\x2dbfa712df5b4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 16 04:59:08.100569 kubelet[2809]: I0416 04:59:08.100282 2809 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-16T04:59:08Z","lastTransitionTime":"2026-04-16T04:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 16 04:59:08.889632 kubelet[2809]: I0416 04:59:08.889473 2809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde4ab81-4b19-4648-9e7a-bfa712df5b4d" path="/var/lib/kubelet/pods/bde4ab81-4b19-4648-9e7a-bfa712df5b4d/volumes" Apr 16 04:59:08.890360 kubelet[2809]: I0416 04:59:08.890330 2809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87fdd58-6e76-4e5f-a092-b9a7291a0eef" path="/var/lib/kubelet/pods/c87fdd58-6e76-4e5f-a092-b9a7291a0eef/volumes" Apr 16 04:59:08.914783 sshd[4378]: Connection closed by 10.0.0.1 port 54194 Apr 16 04:59:08.917563 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Apr 16 04:59:08.923847 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:54194.service: Deactivated successfully. Apr 16 04:59:08.927491 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 04:59:08.930361 systemd-logind[1544]: Session 23 logged out. Waiting for processes to exit. 
Apr 16 04:59:08.933181 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:54210.service - OpenSSH per-connection server daemon (10.0.0.1:54210). Apr 16 04:59:08.936205 systemd-logind[1544]: Removed session 23. Apr 16 04:59:08.995704 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:08.997260 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:09.002649 systemd-logind[1544]: New session 24 of user core. Apr 16 04:59:09.018280 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 04:59:09.527277 sshd[4530]: Connection closed by 10.0.0.1 port 54210 Apr 16 04:59:09.529086 sshd-session[4527]: pam_unix(sshd:session): session closed for user core Apr 16 04:59:09.541941 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:54210.service: Deactivated successfully. Apr 16 04:59:09.544992 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 04:59:09.555218 systemd-logind[1544]: Session 24 logged out. Waiting for processes to exit. Apr 16 04:59:09.568527 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:54218.service - OpenSSH per-connection server daemon (10.0.0.1:54218). Apr 16 04:59:09.572312 systemd-logind[1544]: Removed session 24. Apr 16 04:59:09.585811 systemd[1]: Created slice kubepods-burstable-pod11672bfb_4621_40b6_9dc0_8d88041c555b.slice - libcontainer container kubepods-burstable-pod11672bfb_4621_40b6_9dc0_8d88041c555b.slice. Apr 16 04:59:09.614651 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 54218 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:09.615487 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:09.618914 systemd-logind[1544]: New session 25 of user core. Apr 16 04:59:09.627286 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 16 04:59:09.634703 sshd[4545]: Connection closed by 10.0.0.1 port 54218 Apr 16 04:59:09.634917 sshd-session[4542]: pam_unix(sshd:session): session closed for user core Apr 16 04:59:09.638096 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:54218.service: Deactivated successfully. Apr 16 04:59:09.639452 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 04:59:09.640313 systemd-logind[1544]: Session 25 logged out. Waiting for processes to exit. Apr 16 04:59:09.641793 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:54234.service - OpenSSH per-connection server daemon (10.0.0.1:54234). Apr 16 04:59:09.643504 systemd-logind[1544]: Removed session 25. Apr 16 04:59:09.665727 kubelet[2809]: I0416 04:59:09.665692 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11672bfb-4621-40b6-9dc0-8d88041c555b-hubble-tls\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665727 kubelet[2809]: I0416 04:59:09.665725 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11672bfb-4621-40b6-9dc0-8d88041c555b-cilium-ipsec-secrets\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665925 kubelet[2809]: I0416 04:59:09.665740 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7cvc\" (UniqueName: \"kubernetes.io/projected/11672bfb-4621-40b6-9dc0-8d88041c555b-kube-api-access-k7cvc\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665925 kubelet[2809]: I0416 04:59:09.665752 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-cni-path\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665925 kubelet[2809]: I0416 04:59:09.665764 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11672bfb-4621-40b6-9dc0-8d88041c555b-cilium-config-path\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665925 kubelet[2809]: I0416 04:59:09.665801 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-host-proc-sys-net\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.665925 kubelet[2809]: I0416 04:59:09.665866 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-host-proc-sys-kernel\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665880 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-cilium-run\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665924 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-cilium-cgroup\") pod \"cilium-cc6zs\" (UID: 
\"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665937 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-bpf-maps\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665947 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-hostproc\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665959 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-etc-cni-netd\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666005 kubelet[2809]: I0416 04:59:09.665979 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-lib-modules\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666095 kubelet[2809]: I0416 04:59:09.666014 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11672bfb-4621-40b6-9dc0-8d88041c555b-xtables-lock\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.666095 kubelet[2809]: I0416 04:59:09.666027 2809 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11672bfb-4621-40b6-9dc0-8d88041c555b-clustermesh-secrets\") pod \"cilium-cc6zs\" (UID: \"11672bfb-4621-40b6-9dc0-8d88041c555b\") " pod="kube-system/cilium-cc6zs" Apr 16 04:59:09.685717 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 54234 ssh2: RSA SHA256:IiTfK2rD8LSHXggGFdyxto9bXxmDCS3DeyOSiMga61s Apr 16 04:59:09.687354 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:59:09.692809 systemd-logind[1544]: New session 26 of user core. Apr 16 04:59:09.703358 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 04:59:09.896852 kubelet[2809]: E0416 04:59:09.896342 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:59:09.900993 containerd[1586]: time="2026-04-16T04:59:09.900949229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cc6zs,Uid:11672bfb-4621-40b6-9dc0-8d88041c555b,Namespace:kube-system,Attempt:0,}" Apr 16 04:59:09.925151 containerd[1586]: time="2026-04-16T04:59:09.924543730Z" level=info msg="connecting to shim 18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:59:09.963287 systemd[1]: Started cri-containerd-18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326.scope - libcontainer container 18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326. 
Apr 16 04:59:09.994057 containerd[1586]: time="2026-04-16T04:59:09.993863891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cc6zs,Uid:11672bfb-4621-40b6-9dc0-8d88041c555b,Namespace:kube-system,Attempt:0,} returns sandbox id \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\""
Apr 16 04:59:09.997581 kubelet[2809]: E0416 04:59:09.997538 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:10.007335 containerd[1586]: time="2026-04-16T04:59:10.007224656Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 16 04:59:10.024022 containerd[1586]: time="2026-04-16T04:59:10.023894227Z" level=info msg="Container e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:59:10.035614 containerd[1586]: time="2026-04-16T04:59:10.035569085Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6\""
Apr 16 04:59:10.037503 containerd[1586]: time="2026-04-16T04:59:10.037479138Z" level=info msg="StartContainer for \"e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6\""
Apr 16 04:59:10.038061 containerd[1586]: time="2026-04-16T04:59:10.038039564Z" level=info msg="connecting to shim e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" protocol=ttrpc version=3
Apr 16 04:59:10.060302 systemd[1]: Started cri-containerd-e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6.scope - libcontainer container e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6.
Apr 16 04:59:10.083650 containerd[1586]: time="2026-04-16T04:59:10.083618432Z" level=info msg="StartContainer for \"e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6\" returns successfully"
Apr 16 04:59:10.089491 systemd[1]: cri-containerd-e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6.scope: Deactivated successfully.
Apr 16 04:59:10.090404 containerd[1586]: time="2026-04-16T04:59:10.090367247Z" level=info msg="received container exit event container_id:\"e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6\" id:\"e220fcc700d336519c8d44d498d65ace3558af3ff760dd5c332ce10089f02cf6\" pid:4627 exited_at:{seconds:1776315550 nanos:90052555}"
Apr 16 04:59:10.638703 kubelet[2809]: E0416 04:59:10.638606 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:10.665094 containerd[1586]: time="2026-04-16T04:59:10.664954055Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 16 04:59:10.673338 containerd[1586]: time="2026-04-16T04:59:10.673285309Z" level=info msg="Container 0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:59:10.683503 containerd[1586]: time="2026-04-16T04:59:10.683447775Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec\""
Apr 16 04:59:10.684582 containerd[1586]: time="2026-04-16T04:59:10.684553134Z" level=info msg="StartContainer for \"0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec\""
Apr 16 04:59:10.685286 containerd[1586]: time="2026-04-16T04:59:10.685258718Z" level=info msg="connecting to shim 0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" protocol=ttrpc version=3
Apr 16 04:59:10.723285 systemd[1]: Started cri-containerd-0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec.scope - libcontainer container 0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec.
Apr 16 04:59:10.771984 containerd[1586]: time="2026-04-16T04:59:10.771907103Z" level=info msg="StartContainer for \"0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec\" returns successfully"
Apr 16 04:59:10.777857 systemd[1]: cri-containerd-0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec.scope: Deactivated successfully.
Apr 16 04:59:10.779550 containerd[1586]: time="2026-04-16T04:59:10.778905212Z" level=info msg="received container exit event container_id:\"0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec\" id:\"0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec\" pid:4671 exited_at:{seconds:1776315550 nanos:777424000}"
Apr 16 04:59:10.801694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0609d383f68d05938d8fcdb158153ba730975a8574b37a46d90014a703ccd4ec-rootfs.mount: Deactivated successfully.
Apr 16 04:59:11.645700 kubelet[2809]: E0416 04:59:11.645576 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:11.649744 containerd[1586]: time="2026-04-16T04:59:11.649701592Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 16 04:59:11.672960 containerd[1586]: time="2026-04-16T04:59:11.672568011Z" level=info msg="Container 0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:59:11.686336 containerd[1586]: time="2026-04-16T04:59:11.686272186Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b\""
Apr 16 04:59:11.696019 containerd[1586]: time="2026-04-16T04:59:11.695856526Z" level=info msg="StartContainer for \"0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b\""
Apr 16 04:59:11.702397 containerd[1586]: time="2026-04-16T04:59:11.702291144Z" level=info msg="connecting to shim 0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" protocol=ttrpc version=3
Apr 16 04:59:11.726281 systemd[1]: Started cri-containerd-0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b.scope - libcontainer container 0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b.
Apr 16 04:59:11.791375 containerd[1586]: time="2026-04-16T04:59:11.791333001Z" level=info msg="StartContainer for \"0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b\" returns successfully"
Apr 16 04:59:11.792510 systemd[1]: cri-containerd-0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b.scope: Deactivated successfully.
Apr 16 04:59:11.801158 containerd[1586]: time="2026-04-16T04:59:11.798571555Z" level=info msg="received container exit event container_id:\"0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b\" id:\"0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b\" pid:4716 exited_at:{seconds:1776315551 nanos:794603846}"
Apr 16 04:59:11.821148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c34054e49c35e3db416887860d1234b9f6844cebaf080daa5c8736ad0a1528b-rootfs.mount: Deactivated successfully.
Apr 16 04:59:12.100769 kubelet[2809]: E0416 04:59:12.100721 2809 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:59:12.653827 kubelet[2809]: E0416 04:59:12.653801 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:12.663008 containerd[1586]: time="2026-04-16T04:59:12.662912527Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 16 04:59:12.674294 containerd[1586]: time="2026-04-16T04:59:12.674232151Z" level=info msg="Container 65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:59:12.674405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939914005.mount: Deactivated successfully.
Apr 16 04:59:12.680969 containerd[1586]: time="2026-04-16T04:59:12.680922160Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42\""
Apr 16 04:59:12.681380 containerd[1586]: time="2026-04-16T04:59:12.681351338Z" level=info msg="StartContainer for \"65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42\""
Apr 16 04:59:12.682033 containerd[1586]: time="2026-04-16T04:59:12.681987692Z" level=info msg="connecting to shim 65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" protocol=ttrpc version=3
Apr 16 04:59:12.698328 systemd[1]: Started cri-containerd-65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42.scope - libcontainer container 65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42.
Apr 16 04:59:12.739945 systemd[1]: cri-containerd-65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42.scope: Deactivated successfully.
Apr 16 04:59:12.742342 containerd[1586]: time="2026-04-16T04:59:12.742300488Z" level=info msg="received container exit event container_id:\"65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42\" id:\"65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42\" pid:4757 exited_at:{seconds:1776315552 nanos:740168527}"
Apr 16 04:59:12.743376 containerd[1586]: time="2026-04-16T04:59:12.743356998Z" level=info msg="StartContainer for \"65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42\" returns successfully"
Apr 16 04:59:12.778019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65f37827deaeb1a06ea178ee2277300082e0a24706f5564ba75edfccb519df42-rootfs.mount: Deactivated successfully.
Apr 16 04:59:13.665899 kubelet[2809]: E0416 04:59:13.665794 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:13.670366 containerd[1586]: time="2026-04-16T04:59:13.670303010Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 16 04:59:13.683405 containerd[1586]: time="2026-04-16T04:59:13.683310283Z" level=info msg="Container 3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:59:13.690370 containerd[1586]: time="2026-04-16T04:59:13.690316115Z" level=info msg="CreateContainer within sandbox \"18065b05c06ab0b9118b57bd17f855c8cb3ff67d57f7837d8d521af1e1aa1326\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343\""
Apr 16 04:59:13.691531 containerd[1586]: time="2026-04-16T04:59:13.690837242Z" level=info msg="StartContainer for \"3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343\""
Apr 16 04:59:13.691531 containerd[1586]: time="2026-04-16T04:59:13.691478218Z" level=info msg="connecting to shim 3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343" address="unix:///run/containerd/s/077a893d37eb04fb0d45fcec0d49165e32849560e57adf870bc85f6a2f84cb9f" protocol=ttrpc version=3
Apr 16 04:59:13.707264 systemd[1]: Started cri-containerd-3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343.scope - libcontainer container 3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343.
Apr 16 04:59:13.751840 containerd[1586]: time="2026-04-16T04:59:13.751810020Z" level=info msg="StartContainer for \"3b8a94cadcd3416b9c91cc53dbca9d6960ad8fc39d3da6173360ab2634b89343\" returns successfully"
Apr 16 04:59:14.006147 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 16 04:59:14.673963 kubelet[2809]: E0416 04:59:14.673854 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:14.698249 kubelet[2809]: I0416 04:59:14.697848 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cc6zs" podStartSLOduration=5.697833212 podStartE2EDuration="5.697833212s" podCreationTimestamp="2026-04-16 04:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 04:59:14.697407032 +0000 UTC m=+68.250943352" watchObservedRunningTime="2026-04-16 04:59:14.697833212 +0000 UTC m=+68.251369587"
Apr 16 04:59:15.894233 kubelet[2809]: E0416 04:59:15.894063 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:16.814174 systemd-networkd[1530]: lxc_health: Link UP
Apr 16 04:59:16.820273 systemd-networkd[1530]: lxc_health: Gained carrier
Apr 16 04:59:17.898068 kubelet[2809]: E0416 04:59:17.897946 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:18.144467 systemd-networkd[1530]: lxc_health: Gained IPv6LL
Apr 16 04:59:18.697147 kubelet[2809]: E0416 04:59:18.697049 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:19.699148 kubelet[2809]: E0416 04:59:19.698235 2809 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:59:22.302813 sshd[4555]: Connection closed by 10.0.0.1 port 54234
Apr 16 04:59:22.303347 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
Apr 16 04:59:22.306355 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:54234.service: Deactivated successfully.
Apr 16 04:59:22.308008 systemd[1]: session-26.scope: Deactivated successfully.
Apr 16 04:59:22.309059 systemd-logind[1544]: Session 26 logged out. Waiting for processes to exit.
Apr 16 04:59:22.309911 systemd-logind[1544]: Removed session 26.